Nvidia - Cuda Toolkit for options pricing
cgorac said:

I too have recently been heavily involved in writing pricing code in CUDA, and have done a lot of HPC work on other platforms over the past couple of years, so here are some opinions of mine:

I'd say that going with NVIDIA hardware and the CUDA API is still the safest option for anyone looking for a speed-up. As mentioned above, IBM Cell is dead (and it was ugly as hell to program). Larrabee is going to be heavily delayed (by at least a year), and its future is questionable overall (although Intel seems to be shifting Larrabee's focus from GPU to HPC accelerator). As for OpenCL (also mentioned in some posts): I had the opportunity to do a lot of OpenCL work recently, and the thing is crap at the moment. The programming model is fine, rather similar to CUDA, but the driver/tools implementations are awful. With NVIDIA's OpenCL drivers you get several times slower performance than for the same CUDA code; with AMD the performance is also very hard to extract (AMD has some very fast hardware on offer, but they still seem unconvinced about using GPUs for HPC, so the effort they put in is far behind what NVIDIA is doing). Overall, the state of OpenCL today is about where CUDA was roughly 2.5 years ago: it is certainly going to improve, but I see no reason to wait for it, especially since write-OpenCL-once-run-everywhere appears to be a myth - you have to tweak for each platform separately, so it really has no advantage over simply settling on NVIDIA hardware and sticking with CUDA.

Many other efforts exist to provide a higher-level paradigm for GPU/accelerator programming. One example is MATLAB plugins, like the above-mentioned Jacket (http://www.accelereyes.com) from AccelerEyes (there are others, like GPUmat, http://gp-you.org); I was involved in both implementing and using something similar, and I'd say these won't fly either: it is very hard to match MATLAB routines in semantics and numerical precision while still keeping the GPU efficiently utilized. There is also a lot of work on extensions to general-purpose languages that semi-automatically parallelize given sections of code for execution on an accelerator (GPU or other). For example, the recent release of the Portland Group compiler suite offers something like this for Fortran (http://www.pgroup.com/resources/accel.htm), although I didn't like it - too much OpenMP-like stuff for my taste. On the other hand, I really liked an alternative capability in the same compiler release: writing CUDA kernels in Fortran, with all the CUDA runtime functions available through nice native Fortran syntax (http://www.pgroup.com/resources/cudafortran.htm).
For C++, RapidMind provided an automated translation platform that was rather mature (if I remember correctly, it supported multi-core CPUs, GPUs, and Cell), but they were recently acquired by Intel, so I'd expect the soon-to-be-released-in-beta Intel Ct platform (http://www.intel.com/go/Ct) to be much like it; it may be worth a look.

As far as FPGAs are concerned, I don't agree with DailyVaR - I think there is a lot of potential in FPGAs, especially given recent C-to-FPGA developments. The Impulse C offering (http://www.impulseaccelerated.com) is very mature - I have experimented with it to some extent, and while the programming model is certainly even more complicated than for GPUs, the effort could well be worthwhile given the overall speed-up potential. Other vendors are also starting to offer this kind of tool (Mitrionics, http://www.mitrionics.com, for example), so I'd expect this field to quickly mature into a viable alternative to GPUs.

So, overall: there is a lot of very interesting development going on, but the problem is that the programming models are far from standardized, and it is hard to know which one will eventually win out as the de facto standard. Still, considerable speed-ups (and thus a competitive advantage) can be achieved by employing accelerators even today, so I'd say investing in this kind of development is already a must; and I'd restate that going with NVIDIA hardware and the CUDA API is a pretty safe bet at the moment: the hardware is fast and improving (Fermi is going to bring some really nice improvements), the software stack is mature and stable, and there is also a considerable pool of people knowledgeable in CUDA to hire from.