Longstaff-Schwartz on GPU

Has anyone benchmarked their implementation of Longstaff-Schwartz for a vanilla American option? I'm looking at doing an NVIDIA CUDA implementation, but I'm curious how long the run-times are for a standard C++ CPU implementation.

FYI, I've been getting 50x speed-ups for Asian options.
 
I'm looking at computing with NVIDIA CUDA 1.1 and C++, too.

How do you get a 50x speedup for Asian options?
 
Simulating Asian options is quite easy, considering NVIDIA has already implemented a Monte Carlo based option pricer. You extend the random number draws from a single period to multiple periods: for example, instead of N scenarios you now generate M periods x N scenarios.
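
As a rough sketch of that layout (buffer names and sizes here are my own illustration, not taken from the SDK sample), you just allocate one device buffer of M x N draws instead of N:

// Illustrative only: M periods x N scenarios of standard normal draws,
// stored period-major so that, within a period, consecutive scenarios
// are contiguous in memory.
int M = 252;                      // monitoring dates (e.g. daily over a year)
int N = 1 << 20;                  // simulated scenarios
size_t count = (size_t)M * N;

float *d_normals = 0;
cudaMalloc((void**)&d_normals, count * sizeof(float));
// Fill d_normals with N(0,1) draws using whatever generator you already
// use for the single-period case, or generate on the host and copy up
// with cudaMemcpy.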

Now you map each CUDA thread to one scenario: starting from the first day, compute the price on each successive day while accumulating the running average. Once the prices for each scenario have been accumulated, you can compute the average. Averages can be computed on either the GPU or the CPU. Believe it or not, depending on how many numbers you are summing, the CPU can be much faster than the GPU.
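
For concreteness, here is a minimal kernel sketch of that thread-per-scenario mapping (not the actual code behind the 50x number); it assumes the pre-generated normals from above, geometric Brownian motion dynamics, and an arithmetic-average call payoff, and all parameter names are illustrative:

// Illustrative sketch only: one CUDA thread walks one scenario through all
// M periods, accumulating the running average of the simulated prices.
// `normals` holds M*N pre-generated N(0,1) draws, laid out period-major:
// normals[t * N + scenario].
__global__ void asianPayoffKernel(const float *normals,  // M*N standard normal draws
                                  float *payoffs,        // N discounted payoffs (output)
                                  int M, int N,
                                  float S0, float K,
                                  float r, float sigma, float dt)
{
    int scenario = blockIdx.x * blockDim.x + threadIdx.x;
    if (scenario >= N) return;

    float S = S0;
    float sum = 0.0f;
    float drift = (r - 0.5f * sigma * sigma) * dt;
    float vol   = sigma * sqrtf(dt);

    for (int t = 0; t < M; ++t) {
        float z = normals[t * N + scenario];   // coalesced: neighbouring threads read neighbouring addresses
        S *= __expf(drift + vol * z);          // one GBM step
        sum += S;                              // running sum for the arithmetic average
    }

    float avg = sum / M;
    payoffs[scenario] = expf(-r * M * dt) * fmaxf(avg - K, 0.0f);  // discounted call payoff
}

The payoffs array can then be averaged with a GPU reduction, or copied back with cudaMemcpy and summed on the CPU, whichever turns out faster for your N.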

Also, be careful not to blow out your memory (use more than what's physically available) on the GPU card. Unlike on a traditional CPU system, an invalid memory access does not crash in CUDA; instead, you get a random number back. :-ss
 