Hi,
I have been asked to optimize an option pricing algorithm on a CUDA GPU and to vectorize it as much as possible. I am a novice, so I'm wondering if anybody here knows of resources I can read and study?
In my first experiment, I tried to offload a loop in the random variable generation because it took a lot of time. However, the GPU version turned out to be significantly slower. I suspect the cause was having to copy four long vectors to the GPU and then back to the CPU.
Since the same option pricing algorithm has to be run millions of times, I am thinking it might be better to move the entire algorithm onto the GPU rather than just one loop. The GPU would create the intermediate arrays, and I would only need to copy the final results back to the CPU.
What do you guys think?
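For what it's worth, here is a minimal sketch of that idea for a Monte Carlo European call (all names and parameters here are illustrative, not from your code). The random draws are generated on the device with cuRAND's device API, so nothing is copied in; the only transfer is the final `results` array back to the host:

```cuda
#include <curand_kernel.h>

// One thread per simulated path. Randoms are generated on the GPU and the
// intermediate price path never leaves device registers, so the only
// host<->device traffic is copying `results` back once at the end.
__global__ void price_paths(float *results, int n_paths, int n_steps,
                            float S0, float K, float r, float sigma,
                            float dt, unsigned long long seed)
{
    int id = blockIdx.x * blockDim.x + threadIdx.x;
    if (id >= n_paths) return;

    curandState state;
    curand_init(seed, id, 0, &state);   // independent stream per thread

    float S = S0;
    for (int i = 0; i < n_steps; ++i) {
        float z = curand_normal(&state);  // standard normal draw on device
        // Geometric Brownian motion step
        S *= expf((r - 0.5f * sigma * sigma) * dt + sigma * sqrtf(dt) * z);
    }
    // Discounted call payoff for this path
    results[id] = expf(-r * n_steps * dt) * fmaxf(S - K, 0.0f);
}
```

On the host you would `cudaMalloc` the `results` buffer, launch the kernel, and do a single `cudaMemcpy` back (or reduce the average on the device as well). That matches your intuition: the transfer cost is paid once per batch of paths instead of once per loop.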