
Modern Computational Finance book

I wanted to let the community know about the book Modern Computational Finance, which I published last month with Wiley. To limit repetition, I refer to my post on Medium for the story:
Modern Computational Finance: AAD and Parallel Simulations by Antoine Savine
 
I downloaded, inspected and ran the GitHub code. I have a number of initial questions to make sure I am on the same wavelength. In particular, I never use adjectives in my work because they are subjective and context-sensitive:

  1. What is the 'final' software product?
  2. What is 'Modern C++'? (BTW, that C++17 'constexpr' is incongruous, and constexpr is the shakiest syntax in C++11.) The code I see is C++03 with a strong bias to C. To be honest, it has a 90s feel to me (disclaimer: I began C++ in 1987 and have seen many students along the way). In fairness, maybe the GitHub code is not indicative.
  3. Can you make 'professional implementation' more precise? You probably mean quality.
  4. What's the rationale for the C-based Microsoft XLL Kit in the GitHub offering?
Good definitions are important. I am not sure how to respond if these are not agreed on by all.

It is also good that discussion flows in two directions. I do have a number of suggestions that may help as improvements going forward, if you wish. As a piece of shameless self-promotion :D, my (recent) 2nd edition Wiley book on computational finance covers C++11 syntax and related material in excruciating detail.



QuantNet and Baruch offer C++ and Advanced C++11/C++14 courses, and many members here will welcome advances in computational finance, and certainly in C++. Without being too bombastic about it, it is a kind of gold standard. There is a wonderful and knowledgeable C++ community here.

 
Hello Daniel, thank you for your comments, let me try to address your questions:

What is the 'final' software product?

This is not standalone software, but the companion code for my book Modern Computational Finance, published with Wiley.

It looks like both of us published books with Wiley around the same time, with similar titles but, I think, different contents. The principal focus of my book is adjoint differentiation (AD) and its automatic implementation (AAD). The book also teaches generic Monte-Carlo simulations (generic meaning modular in terms of models and products), with a strong focus on a parallel (multi-threaded) implementation and on instrumentation with AAD. My book teaches the implementation of all of these in C++, but it is not a book to learn C++, modern or otherwise. It is a book on AAD and Monte-Carlo, with an implementation in C++.

This being said, the files AAD.h (with dependencies on gaussians.h and blocklist.h) form a self-contained AAD framework, applicable not only to financial simulations but also to FDM and other numerical implementations, as well as calibration, back-propagation in deep learning, and any application where many differentials of scalar results must be computed quickly. The files MC.h (with dependencies) constitute the skeleton of a generic (modular) Monte-Carlo framework, which can accommodate a wide range of models and products.
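Schematically, 'modular' means something like the sketch below (an illustration written for this post only; the names Model, Product and simulate are made up, and the actual interfaces in MC.h are richer):

C++:
#include <vector>
#include <cstddef>

// Illustration only: models generate scenarios...
struct Model
{
    virtual ~Model() {}
    virtual std::vector<double> generatePath(std::size_t nSteps) const = 0;
};

// ...products turn scenarios into payoffs...
struct Product
{
    virtual ~Product() {}
    virtual double payoff(const std::vector<double>& path) const = 0;
};

// ...and one simulation algorithm works with any model/product pair.
double simulate(const Model& model, const Product& product,
                std::size_t nPaths, std::size_t nSteps)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < nPaths; ++i)
        sum += product.payoff(model.generatePath(nSteps));
    return sum / nPaths;
}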

What is 'Modern C++'?

You are right, I should really specify what I mean, since this term is thrown around a lot. What I mean is that the implementation relies strongly on patterns, constructs and libraries from C++11 (threading, lambdas, move semantics...) and, much less, from C++14/17, like constexpr (which I happen to like a lot; I would be very happy to discuss C++ constructs with you privately, at your convenience).
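For instance, a contrived snippet (written for this post, not taken from the book) combining those C++11 building blocks:

C++:
#include <vector>
#include <thread>
#include <utility>
#include <cstddef>

constexpr std::size_t NPATHS = 1 << 20;      // C++11 constexpr

std::vector<double> simulateChunk(std::size_t begin, std::size_t end)
{
    std::vector<double> chunk(end - begin, 0.0);
    // ... fill the chunk ...
    return chunk;                            // moved out, not copied
}

int main()
{
    std::vector<double> left, right;
    // a lambda runs on a worker thread while we simulate the other half
    std::thread worker([&] { left = simulateChunk(0, NPATHS / 2); });
    right = simulateChunk(NPATHS / 2, NPATHS);
    worker.join();
    // move semantics: take over left's storage without copying it
    std::vector<double> all = std::move(left);
    all.insert(all.end(), right.begin(), right.end());
    return 0;
}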

What I don't do is systematically apply recent or fashionable design patterns. The focus is on code that runs fast (a critical requirement, given the massive regulatory calculations demanded of banks) and that may be easily reused, extended and modified (another key requirement in a fast-changing environment). Under these constraints, I tried to make the code as easy to understand as possible for most quants, risk managers, developers, derivatives professionals and students, and so avoided elegant but complicated code unless necessary.

Finally, the design of the Monte-Carlo library is object oriented, which is maybe why it has this nostalgic 90s feel. I believe that OOD is still well suited to Monte-Carlo libraries, and I explain exactly why in the book.

Can you make 'professional implementation' more precise?

The wording 'professional implementation' is not a judgement about quality. What I mean is that the code in the book is implemented similarly to how we do it in banks, as opposed to a demonstration code just for the purpose of illustrating the concepts in the book.

For example, the code of my introductory presentation on AAD in machine learning and finance is not professional. It is demonstration code, simplified to the extreme and biased towards C, so it is understandable by anyone with minimal programming experience. The code for the presentation is gathered in a file named toyCode.h.

By contrast, the code in the book is professional in the sense that it reflects the production code implemented in financial institutions (in banks anyway; I never worked for a fund, but I do have 23 years' experience in French, American, Japanese and Danish banks).

Why the Microsoft XLL kit?

This kit is necessary to export C++ code to Excel. For convenience, the code is exported to Excel so readers can easily and immediately replicate the results of the book or make calculations of their own.

In addition, I am often asked how to connect C++ with Excel. I ended up writing a tutorial here, which I made part of the GitHub repo because I did not want to include it in the book, where it would feel out of place. The tutorial is based on Microsoft's kit; it is meant for an audience who wants to quickly learn a recipe for exporting code to Excel and does not need the depth or detail of a complete reference like Dalton's (excellent) book, also with Wiley.
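At its core, an XLL worksheet function is just a C function exported from a DLL; schematically (a bare sketch assuming the Microsoft XLL SDK; the registration boilerplate in xlAutoOpen/xlfRegister is omitted here and covered in the tutorial):

C++:
#include <cmath>

// extern "C" prevents C++ name mangling so Excel can find the entry
// point; the function must still be registered with Excel through
// xlAutoOpen / xlfRegister (from the Microsoft XLL SDK), not shown.
extern "C" __declspec(dllexport)
double __stdcall xNormalCdf(double x)
{
    return 0.5 * std::erfc(-x / std::sqrt(2.0));
}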

Finally, my book is also the curriculum for my Computational Finance classes at Copenhagen University, where students are asked to produce XLLs for their hand-ins (again, because this is very convenient and because this is how it works in banks).


I do hope I have addressed your concerns, and I am happy to discuss more at your convenience, preferably on the phone or over coffee. I fully intend to read your book, and I would be honored if you read and reviewed mine, keeping in mind, again, that my book does not teach C++, but the 'modern' algorithms and techniques recently introduced to finance, together with a professional implementation (now that these terms are clarified).

Congratulations and thank you for the good job training professional C++ coders for the financial industry.

Kind regards,

Antoine Savine
 
That's clear now, thanks, Antoine; the term 'professional' could also be called 'production', I suppose. That's fine; it encompasses the constraints and use cases in software projects. I appreciate that real life demands speed. One aspect is that software is also a product which costs money to maintain, but that is probably outside the scope here. Some products have long lifetimes (5-30 years), which influences how one perceives software. (In life we strive to 'get it working, then get it right, then and only then get it optimised' :))

I like the template code for AD and Expressions (the latter being similar in spirit to material in my Boost I book). Maybe it could be used by students here for the Baruch advanced C++ course, where we have a defined architecture for applications (e.g. Monte Carlo). What would be nice is a plug-in AD component to compute the greeks; a narrow interface using C++11 std::function would be good IMO, along the lines sketched below.
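Something like this (a rough sketch of the narrow interface I have in mind; the names are made up, and the finite-difference body is only a stand-in for an AD plug-in):

C++:
#include <functional>
#include <vector>
#include <cstddef>

// A pricer is just a callable from parameters to a value.
using Pricer = std::function<double(const std::vector<double>&)>;

// Greeks by central differences; an AD component could plug in behind
// the same interface and return all sensitivities in a single pass.
std::vector<double> greeks(const Pricer& price, std::vector<double> x, double h = 1e-5)
{
    std::vector<double> g(x.size());
    for (std::size_t i = 0; i < x.size(); ++i)
    {
        const double xi = x[i];
        x[i] = xi + h; const double up = price(x);
        x[i] = xi - h; const double dn = price(x);
        x[i] = xi;
        g[i] = (up - dn) / (2.0 * h);
    }
    return g;
}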

Personally, I am very interested in the maths in your book and then in the (pseudo-code) algorithms that can be mapped to C++11.

I think it would be a good idea to use your book as a foundation for students to learn more about AD (and ML, whatever that is :)). BTW, I am doing some work on the Complex Step Method, which is a well-kept secret but is used in practice.


Section 2.4 there discusses the connection with AD.
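For the curious, the whole method fits in a few lines (my own sketch, not code from either book):

C++:
#include <complex>

// Complex Step Method: f'(x) = Im(f(x + ih)) / h + O(h^2).
// There is no subtractive cancellation, so h can be tiny (e.g. 1e-20)
// and the derivative comes out accurate to machine precision.
template <class F>
double complexStep(F f, double x, double h = 1e-20)
{
    return std::imag(f(std::complex<double>(x, h))) / h;
}

// Example: complexStep([](std::complex<double> z) { return z * z; }, 3.0)
// returns 6.0, the exact derivative of x^2 at x = 3.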

Some random thoughts:

  1. Writing thread-safe, scalable(!) and speedy multi-threaded code is very difficult and I tend to avoid it myself (I don't work on real-time systems). Second, threads are suited to asynchronous events and timing constraints, and less so to compute-intensive parallel processing. For that, tasking is better: more scalable and coarse-grained. IMO this is the model going forward (see the PPL and TBB libraries, and the sketch after this list). I really like those libraries.
  2. I use Boost and open-source libraries as far as possible. A bit like the Fortran mindset. I suppose I am a Lego programmer!
  3. C++17 has a lot of large-grained concurrency features coming down the track AFAIR.
  4. I used XLL a long time ago (not my cup of tea, to be honest). These days Excel-DNA is extremely popular (C#, F#), but there is also C++/CLI, which speaks both native C++ and .NET.
  5. Smart pointers rulez?
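On point 1, a minimal sketch of the task-based style, assuming Intel TBB is available (the simulation body is a placeholder):

C++:
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <vector>
#include <cmath>
#include <cstddef>

// Coarse-grained tasking: TBB carves the range into chunks and
// load-balances them across cores; no manual thread management.
void simulateAll(std::vector<double>& results)
{
    tbb::parallel_for(tbb::blocked_range<std::size_t>(0, results.size()),
        [&results](const tbb::blocked_range<std::size_t>& r)
        {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                results[i] = std::sqrt(double(i));   // placeholder for simulating path i
        });
}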
A bit of a mouthful for a Sunday morning. I hope something was useful.
 
Thank you, Daniel. I look forward to working with you on incorporating AAD material. Kind regards.
Antoine
 
You're welcome, Antoine. Just a small remark: passing the vectors by reference might help performance. Saves memory thrashing?

C++:
void toyDupireBarrierMcRisks(
    const double S0, const vector<double> spots, const vector<double> times, const matrix<double> vols,
    const double maturity, const double strike, const double barrier,
    const int Np, const int Nt, const double epsilon, RNG& random,
    // ... remainder of the signature elided in the quote
 
Of course! Thank you, Daniel.
This is part of the toy code, where I don't worry about performance. In the 'professional' code I should never pass collections by value (unless I still have mistakes left...).
This being said, the inputs are marked const, so the compiler should be able to catch and optimise this easily.
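For later readers, the corrected convention looks like this (a self-contained sketch with a made-up name; note that a top-level const on a by-value parameter does not by itself avoid the copy, the reference does):

C++:
#include <vector>
using std::vector;

// Read-only collections come in by const reference (no copy),
// outputs by non-const reference.
void toyRisksFixed(const double S0, const vector<double>& spots,
                   const vector<double>& times, vector<double>& risks)
{
    risks.assign(spots.size(), 0.0);
    // ... Monte-Carlo loop over spots and times, writing into risks ...
    (void)S0; (void)times;   // stub: silence unused-parameter warnings
}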
Kind regards.
Antoine
 
One use case I am interested in is using the book to map its generic (I assume) AD algorithms onto open-source C++ and C# libraries to compute sensitivities.
 
The AAD framework is generic in the sense that it is reusable for many problems without modification of the AAD code. It is also non-invasive in the sense that the calculations in the instrumented code are unchanged.
But it is invasive in the sense that the calculation code must be templated on the real number type and called with the custom Number type defined in the framework. In contrast to differentiation by finite differences, which only needs some calculation executable, programmed in whatever language, to call repeatedly, AAD works through a calculation graph built with operator overloading. It follows that the whole calculation code must be written in C++ and templated (solutions exist for when you must call external executables as part of the calculation; they are briefly described in the book). In addition, implementing AAD truly efficiently is more involved than just replacing all doubles by Numbers. In return for these constraints and hard work, AAD provides many sensitivities with spectacular speed and accuracy. The details of all this are what the book is about.
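To make 'templated on the real number type' concrete, a schematic sketch (written for this post, not the book's code):

C++:
#include <cmath>

// The calculation is written once, templated on the number type T.
template <class T>
T discountedCallPayoff(const T& spot, const T& strike, const T& df)
{
    const T intrinsic = spot > strike ? spot - strike : T(0.0);
    return df * intrinsic;
}

// Instantiated with T = double, this simply computes a value.
// Instantiated with the framework's Number type, whose overloaded
// operators record every operation on a tape, the same code also
// yields all the derivatives in one backward sweep.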
If you want, I can walk you through it by differentiating some simple code (to start with) in an open-source C++ library. I have two examples (analytic Black-Scholes and a simplified Dupire Monte-Carlo) in the slides; although they use a toy AAD framework, not the framework from the book / GitHub, it is very similar from the point of view of the client code. The code from the slides is also copied in toyCode.h on GitHub and exported to Excel for convenience.
 
Interesting book, thanks for sharing.

There is a bug in your Medium blog post in the URL of the book: the first part of the URL points to your GitHub project.
 
Thank you for your kind comments. I could not find the bug in the Medium post; could you please point it out more specifically?
 
It is after the sentence "Read them here": the http part of the link points to the GitHub project.
 
Oh, thank you for flagging this. It is fixed now. BTW, looking at your book, I noticed that you did not release an ebook due to the difficulty of rendering equations correctly. I had the same problem myself: Amazon released a Kindle version with unreadable equations, but Wiley then found a way to fix it, and the Kindle version now has decently rendered equations. You may want to have a look.
 
Thanks for the tip about the Kindle version.
In your book, chapter 13, p. 458, you use a cubic smoothstep as the interpolant. But in the context of Dupire, isn't continuity of the second derivative important as well? Why don't you use the quintic version (as suggested by Ken Perlin in the computer graphics community)? Are you only bumping in time (and constant across all strikes)?
 
Do you mean piecewise cubic spline interpolation? In that case the second derivative is continuous. (I don't have the book to hand so my remark may be redundant.)
 
No, smoothstep is like a piecewise Hermite cubic with f'(x_i) = 0, and f''(x_i) is thus discontinuous. In this case f'(x_i) = 0 is sort of reasonable, since we are interpolating a bump.
 

Hi Jherek.

No worries, I will buy the Kindle version of your book as soon as it is available. I teach volatility at Copenhagen University and I am looking forward to reading it. I looked at the TOC and could see that many important themes for modern equity derivatives risk management are covered.

I travel a lot, and read a lot on flights, so I usually don't purchase hardcovers. Plus, I don't like waiting weeks for a hardcover to be delivered. I like to start reading once I decide to purchase a book. I believe this may be the case for many professionals, so I expect a Kindle version to hopefully raise the visibility of your book considerably.

I used smoothStep for a couple of reasons. As you know, we need a twice differentiable interpolant. Continuity of the second derivative is not necessary, but it does produce continuous local vols, which may be desirable depending on context and usage.

1) SmoothStep is well known, documented on Wikipedia, simple, trivial to implement, fast to execute and twice differentiable as required, making it a natural choice for a book that is not primarily about volatility. A publication dedicated to volatility would be expected to explore alternative, more sophisticated schemes, and your suggestion of a higher-order interpolant may indeed produce smoother, more stable results, although the superbuckets produced with smoothStep are of very decent quality. Note that the library makes it very easy to drop in your own interpolation scheme: just extend or replace the functions interp and interp2D in interp.h.

2) SmoothStep is a local interpolation scheme, in the sense that one interpolated value only depends on its two bracketing knots. For the purpose of risk sensitivities, a local scheme is preferable because it limits spilling: we want the 102.5 local vol to depend only on the 100 and 105 strikes, not 95 or 110. Higher-order schemes are generally not local and may produce some undesirable spilling. For example, cubic splines in the context of interest rate curves are known to produce inaccurate risks for this reason, and have been adjusted to behave in a more local manner.
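For reference, both interpolants are trivial to implement (the quintic 'smootherstep' being the Perlin variant you mention):

C++:
#include <algorithm>

// Cubic smoothstep: f'(0) = f'(1) = 0, but f'' jumps at the knots (C1).
double smoothStep(double t)
{
    t = std::min(std::max(t, 0.0), 1.0);
    return t * t * (3.0 - 2.0 * t);
}

// Perlin's quintic: f' and f'' both vanish at 0 and 1, so the
// interpolation is C2 across the knots.
double smootherStep(double t)
{
    t = std::min(std::max(t, 0.0), 1.0);
    return t * t * t * (t * (6.0 * t - 15.0) + 10.0);
}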

I am not sure I understand your question "Are you only bumping in time (and constant across all strikes)?"
For the production of risks, I don't bump anything; that is the purpose of AAD and check-pointing. But I do compute the full superbucket: the partial derivatives with respect to all the strikes and maturities in the risk view. For calibration, I bump call prices in expiry and strike to implement Dupire's formula.

Kind regards,

Antoine
 
I had a look at smoothstep on Wikipedia, but I wonder if it has the monotonicity, positivity and convexity properties needed for yield curves with sparse data; see in particular Hyman filters, Hagan-West and Akima.
Cubic splines produce smooth but overshooting results in this case. If you have lots of evenly-spaced data then things are better.
 