Nvidia - Cuda Toolkit for options pricing
<blockquote data-quote="cgorac" data-source="post: 16593" data-attributes="member: 1689"><p>Andy,</p><p></p><p>I was working as an external HPC consultant on this particular CUDA-based options pricing project, so at the moment I'm still bound by various NDAs. I can certainly provide you with contacts via PM if you would like to discuss the details with these guys (be prepared for some amount of marketing talk, though). But let me state my experience with the performance improvement this way: the code I wrote (both Monte Carlo and PDE solvers, later implemented for European and American options only) achieved speed-ups close to the ratio of GPU cores to CPU cores, multiplied by the ratio of GPU frequency to CPU frequency. That means that if I run the pricing code on a Tesla C1060, with 240 single-precision units running at 1.3GHz, the code is approximately 30 times faster than SSE-based CPU code (i.e., 4 single-precision units used) on the same machine, with the CPU running at 2.5GHz. I know this is somewhat like comparing apples and oranges, but the 30x speed-up is what counts in the end. Note, on the other hand, that the CPU code had to be heavily tweaked to get a fair comparison - the GPU code was roughly two orders of magnitude faster in my initial tests. What I'm trying to say is: yes, it takes a lot of effort to write really good GPU code and utilize the GPU to its full potential, but these days it is equally hard, if not harder, to write good CPU code. As I have witnessed similar performance improvements on some of my other projects, unrelated to quantitative finance (some numerical algorithms map really well onto the GPU, others admittedly do not), I'm convinced that GPUs could provide tremendous speed-ups for a large part of the compute-intensive calculations in finance. However, the speed-ups in core computations have to be evaluated in the context of your complete application (the well-known Amdahl and Gustafson-Barsis laws can help estimate the overall speed-up possible).</p><p></p><p>As for the initial investment - again, it really depends; there are so many possible scenarios that it's hard to tell. But if you are into options pricing, then I can state with some degree of certainty that if you brought some CUDA-knowledgeable guy(s) on board, in 4-5 man-months you should be able to build the basics of an options pricing engine that would prove the concept and let you estimate whether it is worth proceeding with CUDA or not. The software tools needed are mostly free, and as far as the hardware investment is concerned, you could probably start with one multi-GPU machine for testing (see <a href="http://www.nvidia.com/object/tesla_supercomputer_wtb.html" target="_blank">here</a> for some options), while developers could work on ordinary machines (I did all of my development on higher-end Lenovo/HP laptops equipped with Quadro Mobile solutions). Add to this an estimate of how long it would take you to find the bottlenecks in your existing engine and/or adapt that code so that pricing algorithms are pluggable (so you could switch back and forth between your existing CPU code and the newly written GPU code), and you should be pretty close to an estimate of the initial cost.</p></blockquote><p></p>
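The post's back-of-envelope estimate (core-count ratio times frequency ratio) and its caveat about whole-application speed-up (Amdahl's law) can be sketched in a few lines. This is just an illustration of the arithmetic, not anything from the original project; the 90% parallel fraction in the usage line is a hypothetical figure.

```python
def raw_speedup(gpu_cores, cpu_lanes, gpu_ghz, cpu_ghz):
    """Idealized kernel speed-up: (core-count ratio) * (frequency ratio),
    as described in the post."""
    return (gpu_cores / cpu_lanes) * (gpu_ghz / cpu_ghz)

def amdahl(parallel_fraction, kernel_speedup):
    """Amdahl's law: overall speed-up when only a fraction of the
    application's runtime benefits from the accelerated kernel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / kernel_speedup)

# Tesla C1060 (240 SP units @ 1.3GHz) vs. SSE CPU (4 SP lanes @ 2.5GHz),
# the figures quoted in the post:
s = raw_speedup(gpu_cores=240, cpu_lanes=4, gpu_ghz=1.3, cpu_ghz=2.5)
print(f"kernel speed-up ~ {s:.1f}x")  # ~31x, close to the ~30x observed

# If (hypothetically) 90% of total runtime is the pricing kernel:
print(f"overall speed-up ~ {amdahl(0.9, s):.1f}x")
```

The second print shows why the post stresses Amdahl's law: a ~31x kernel speed-up yields only about 8x overall if 10% of the runtime stays on the CPU.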