Favourite Finite Difference Methods book(s)?

What are your favourite FDM books? I'm primarily interested in more advanced texts.

I enjoyed Duffy's "Financial Instrument Pricing Using C++" but haven't had a look at his FDM book yet.
 
Thanks for the kind words. At some stage (maybe 20**) I plan to do a second edition, with all the schemes in C++ and C#, as well as new FDM schemes that have been tested.
In particular, worth a good look are the thesis by Sheppard on my site and what I am working on for high-factor PDEs using ADE. I have found that actually programming the schemes gives a real understanding of the algorithms. And it's fun when you move on to parallel programming.

http://www.datasimfinancial.com/forum/viewtopic.php?t=289


Basically, the idea is to have explicit, stable schemes which are easily parallelised. I already have a proof of concept in 3D using boost::multi_array as the data structure. For 4D and 5D I need a 64-bit OS and more memory.
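For readers who have not used it, here is a minimal sketch (not Daniel's actual code) of boost::multi_array as the tensor data structure for a 3-factor grid; the mesh sizes and the zero initial condition are placeholders for illustration.

#include <boost/multi_array.hpp>
#include <iostream>

int main()
{
    const int NX = 100, NY = 100, NZ = 100;

    typedef boost::multi_array<double, 3> Tensor;
    Tensor U(boost::extents[NX][NY][NZ]);   // value surface at one time level

    // Placeholder terminal/initial condition on the mesh
    for (int i = 0; i < NX; ++i)
        for (int j = 0; j < NY; ++j)
            for (int k = 0; k < NZ; ++k)
                U[i][j][k] = 0.0;

    // Memory is the binding constraint in higher dimensions:
    // 100^3 doubles is about 8 MB, 100^4 about 800 MB, 100^5 about 80 GB,
    // hence the need for a 64-bit OS and more RAM beyond three factors.
    std::cout << U.num_elements() * sizeof(double) / (1024.0 * 1024.0)
              << " MB allocated\n";
    return 0;
}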

Daniel

//
Roelof's thesis --> http://www.datasimfinancial.com/forum/viewtopic.php?t=101

For FDM (PDE) I recommend Thomas, Numerical Partial Differential Equations, Springer; it gives all the basics and more.
 
Thank you both for the recommendations. I'll be checking out your FDM book as well as Thomas'.

The explicit, stable schemes you mention: might they be interesting candidates for GPU speedup, given that they are so easily parallelised?
 

See section 5 of my ADE article, where I used OpenMP to get a speedup of 1.6 on a two-core machine.

I am testing 3-factor basket options and am getting a response time of 78 seconds for a 100^4 mesh. It's a start. I use boost::multi_array for the tensor class.
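As a hedged illustration (the function name and the trivial copy stencil below are placeholders, not the actual ADE update): an explicit sweep parallelises with a single OpenMP pragma, because every node at the new time level depends only on already known values.

#include <boost/multi_array.hpp>
#include <omp.h>

typedef boost::multi_array<double, 3> Tensor;

// One explicit sweep: threads read Uold (shared, read-only) and write
// disjoint slices of Unew, so no locking is needed inside the loop nest.
void explicitSweep(const Tensor& Uold, Tensor& Unew)
{
    const int NX = static_cast<int>(Uold.shape()[0]);
    const int NY = static_cast<int>(Uold.shape()[1]);
    const int NZ = static_cast<int>(Uold.shape()[2]);

    #pragma omp parallel for
    for (int i = 1; i < NX - 1; ++i)
        for (int j = 1; j < NY - 1; ++j)
            for (int k = 1; k < NZ - 1; ++k)
                Unew[i][j][k] = Uold[i][j][k];   // placeholder for the FD stencil
}

int main()
{
    const int N = 100;
    Tensor Uold(boost::extents[N][N][N]), Unew(boost::extents[N][N][N]);
    explicitSweep(Uold, Unew);   // one time level; wrap in a time loop in practice
    return 0;
}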

See discussion on this issue ---> http://www.wilmott.com/messageview.cfm?catid=34&threadid=75485


We have some tentative results for GPU and will report soon... The code looks very similar to the bespoke OpenMP code.


//
ADE is a so-called additive operator scheme (AOS), which means the solution is the sum of two parallel sweeps U and V. ADI and Soviet splitting are multiplicative operator schemes (MOS) and essentially sequential, especially with LU.
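A rough sketch of that additive structure, under assumed names (upSweep and downSweep are hypothetical stand-ins for the two ADE passes): the sweeps are independent, so they can run as parallel OpenMP sections and then be combined, typically as an average.

#include <boost/multi_array.hpp>
#include <omp.h>

typedef boost::multi_array<double, 3> Tensor;

// Placeholder sweeps: the real ADE passes traverse the mesh in opposite
// index directions, re-using freshly computed values; here they just copy.
void upSweep(const Tensor& prev, Tensor& U)   { U = prev; }
void downSweep(const Tensor& prev, Tensor& V) { V = prev; }

void adeTimeStep(const Tensor& prev, Tensor& U, Tensor& V, Tensor& next)
{
    // The two sweeps do not depend on each other, so run them concurrently.
    #pragma omp parallel sections
    {
        #pragma omp section
        upSweep(prev, U);

        #pragma omp section
        downSweep(prev, V);
    }

    // Additive combination of the sweeps at the new time level.
    for (std::size_t n = 0; n < next.num_elements(); ++n)
        next.data()[n] = 0.5 * (U.data()[n] + V.data()[n]);
}

int main()
{
    const int N = 50;
    Tensor prev(boost::extents[N][N][N]), U(boost::extents[N][N][N]),
           V(boost::extents[N][N][N]),   next(boost::extents[N][N][N]);
    adeTimeStep(prev, U, V, next);   // one time step of the sketch
    return 0;
}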

1- and 2-factor PDEs are rather small, so parallelism does not always give a speedup (starting and stopping threads is costly).

D
 
Since I am quite new to GPU architecture, and you mention the overhead, can you elaborate on how much overhead there is in transferring data (i.e. I/O bottlenecking) to and from the card?

For instance, in the PDE situation, can all the initial data be transferred at the start of the calculation, leaving only the final result to be transferred at the end, or is there a need to communicate continuously with the GPU?

My initial impression is that, since a GPU does not have virtual memory, the video RAM would need to be fairly large to store any significant amount of data!
 