
Black Scholes -- Computational Techniques

There are several questions here.

So... I've heard that in solving the Black-Scholes equation people tend to use a fairly simple trick to find its solution... it's just a bare-bones finite difference discretization.

\( \frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS \frac{\partial V}{\partial S} - rV = 0 \)

So... we have a diffusion term, an advection term, and a sink (discounting) term.

So, using first-order explicit (forward Euler) time stepping with step \( \Delta t \), first-order backward (upwind) differencing for the advection term, and centered differencing for the diffusion term on a grid with spacing \( \Delta S \), we have:

\( V^{n+1}_{i} = V^{n}_{i} - \frac{1}{2}\sigma^2 S_i^2 \frac{\Delta t}{\Delta S^2}\left(V^{n}_{i+1} - 2V^{n}_{i} + V^{n}_{i-1}\right) - rS_i \frac{\Delta t}{\Delta S}\left(V^{n}_{i} - V^{n}_{i-1}\right) + r\,\Delta t\,V^{n}_{i} \)

Fantastic. Now we figure out what goes into each of these coefficients, because they make up our CFL-type stability numbers. There are only two in this equation that we care about, and we make sure each stays below 1/2 (because a high-frequency trading machine backed by a PDE whose solution is in the process of blowing up is probably very bad... the clients wouldn't appreciate the fallout).

--> What order of approximations are actually used out in your field? The simple critter I have above can't be allowed to run for too long, because it'll lose stability. Also, if you need to run the thing over a lot of stocks, each spatial term (all the S) can be handled by different processors covering different pieces of the domain, with a few ghost cells placed at the boundaries between the processor regions. Unfortunately, all of those processors have to finish before getting to the next timestep... right?
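For concreteness, here's a minimal C++ sketch of the update above, with a stability check in the spirit of the 1/2 bound just mentioned (grid setup, boundary handling, and names are my own illustrative choices, not anyone's production code):

```cpp
#include <cstddef>
#include <vector>

// One explicit Euler time step of the discretized scheme above.
// V[i] approximates the option value at S_i = i * dS; the boundary
// entries V[0] and V[N-1] are left untouched here for simplicity
// (a real solver would impose proper boundary conditions).
void explicitStep(std::vector<double>& V, double r, double sigma,
                  double dt, double dS)
{
    const std::size_t N = V.size();
    std::vector<double> Vnew(V);
    for (std::size_t i = 1; i + 1 < N; ++i)
    {
        const double S    = static_cast<double>(i) * dS;
        const double diff = 0.5 * sigma * sigma * S * S
                          * (V[i + 1] - 2.0 * V[i] + V[i - 1]) / (dS * dS);
        const double adv  = r * S * (V[i] - V[i - 1]) / dS; // backward (upwind)
        // Matches the update above: V^{n+1}_i = V^n_i + dt * (-diff - adv + r*V^n_i)
        Vnew[i] = V[i] + dt * (-diff - adv + r * V[i]);
    }
    V.swap(Vnew);
}

// Conservative stability check in the spirit of the discussion above;
// the largest grid value of S dominates both numbers.
bool isStable(double r, double sigma, double Smax, double dt, double dS)
{
    const double diffusionNumber = 0.5 * sigma * sigma * Smax * Smax * dt / (dS * dS);
    const double courantNumber   = r * Smax * dt / dS;
    return diffusionNumber <= 0.5 && courantNumber <= 0.5;
}
```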

Now... with the advent of really fast GPUs, the traditional finite difference solution of the Black-Scholes equation can probably be computed a lot faster on a bunch of them. One of the fastest cards I've been aware of is the NVIDIA GTX Titan. One of these critters costs about $1,000 apiece... and you get close to 6 TFLOPS of computational resources (that's the theoretical peak; in reality it's probably closer to 2 TFLOPS sustained). But still... 1,000 of these suckers gets you into the petaflops regime. Now... since GPUs tend to be better at embarrassingly parallel problems, the finite difference scheme would fare poorly when pitted against a Monte Carlo method, because MC is embarrassingly parallel and the finite difference method is not.
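To make the contrast concrete, here's a minimal single-threaded Monte Carlo pricer for a European call under geometric Brownian motion (names and the fixed seed are illustrative). Note that every pass through the loop is independent of every other pass, which is exactly the embarrassing parallelism a GPU wants:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <random>

// Price a European call by Monte Carlo under GBM. Each simulated path
// is independent of every other path, so the loop body maps directly
// onto GPU threads with no inter-thread communication.
double mcCallPrice(double S0, double K, double r, double sigma, double T,
                   std::size_t numPaths, unsigned seed = 42)
{
    std::mt19937_64 rng(seed);
    std::normal_distribution<double> norm(0.0, 1.0);

    double sumPayoff = 0.0;
    for (std::size_t p = 0; p < numPaths; ++p)
    {
        const double z  = norm(rng);
        // Exact GBM terminal value: no time stepping needed for a
        // path-independent European payoff.
        const double ST = S0 * std::exp((r - 0.5 * sigma * sigma) * T
                                        + sigma * std::sqrt(T) * z);
        sumPayoff += std::max(ST - K, 0.0);
    }
    return std::exp(-r * T) * sumPayoff / static_cast<double>(numPaths);
}
```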

Some of you have actually built some of the systems used on Wall Street. Has this been done yet? Is a cluster of CPUs (perhaps vector processors) actually better to use than a great big cluster of GPUs? What have your experiences with these systems been?

How many points are usually used in the spatial domain? And how do you get the measure of volatility that goes into the equation?
 
I've heard that in solving the Black-Scholes equation people tend to use a fairly simple trick to find its solution

On paper, it looks easy. You've done the very, very easy part.

See the Numerical Methods forum at www.wilmott.com for the eye-opener.


P.S. Your post is full of platitudes. It is not even wrong.
 
The paper answers my first question:

Volatility is derived from market data.
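In practice, "derived from market data" usually means implying the volatility from quoted option prices. A minimal sketch of that inversion by bisection (function names, bracket, and iteration count are my own illustrative choices; it assumes the quote lies within the bracket):

```cpp
#include <cmath>

// Standard normal CDF via the complementary error function.
double normCdf(double x) { return 0.5 * std::erfc(-x / std::sqrt(2.0)); }

// Black-Scholes closed-form price of a European call.
double bsCall(double S, double K, double r, double T, double sigma)
{
    const double d1 = (std::log(S / K) + (r + 0.5 * sigma * sigma) * T)
                    / (sigma * std::sqrt(T));
    const double d2 = d1 - sigma * std::sqrt(T);
    return S * normCdf(d1) - K * std::exp(-r * T) * normCdf(d2);
}

// Implied volatility by bisection: find sigma so the model price matches
// the market quote. Call prices increase monotonically in sigma, so
// bisection on [lo, hi] converges reliably.
double impliedVol(double marketPrice, double S, double K, double r, double T)
{
    double lo = 1e-6, hi = 5.0;  // illustrative bracket
    for (int it = 0; it < 100; ++it)
    {
        const double mid = 0.5 * (lo + hi);
        if (bsCall(S, K, r, T, mid) < marketPrice)
            lo = mid;
        else
            hi = mid;
    }
    return 0.5 * (lo + hi);
}
```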

Is the dimensionality equal to the number of correlated options in the portfolio? What increases dimensionality?

\( L_0 \)-stable schemes for discontinuous data... interesting, I never had to deal with anything like that when I was working with the heat equation before.

With all of the paper's emphasis on implicit schemes, it seems you want your solutions to be unconditionally stable, even if they take slightly longer to compute. Alternating Direction Implicit (ADI) is a really good scheme; would Alternating Direction Explicit (ADE) be appropriate? It is also unconditionally stable, but it's a lot easier to get running in parallel efficiently (see the sketch below). The only problem with ADI is that it's a pain in the neck to code; it takes longer to code than most other numerical schemes.
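To sketch what ADE looks like in practice, here is one Saul'yev-style upward sweep for the pure diffusion model problem \( u_t = a\,u_{xx} \) (a simplified stand-in for the Black-Scholes diffusion term; names and boundary handling are my own illustrative choices). Each point uses the just-updated left neighbor, which keeps the sweep explicit yet unconditionally stable; in the averaged (Barakat-Clark-type) variant, the mirror-image downward sweep starts from the same old time level, so the two sweeps can run concurrently:

```cpp
#include <cstddef>
#include <vector>

// One upward Saul'yev sweep for u_t = a * u_xx, updating u in place.
// lambda = a * dt / (dx * dx). When point i is updated, u[i-1] already
// holds its new-time-level value while u[i] and u[i+1] still hold old
// values -- the mixed stencil that makes the sweep explicit and
// unconditionally stable. Boundary values u[0] and u[N-1] are assumed
// to have been set for the new time level beforehand.
void saulyevUpSweep(std::vector<double>& u, double lambda)
{
    for (std::size_t i = 1; i + 1 < u.size(); ++i)
        u[i] = ((1.0 - lambda) * u[i] + lambda * (u[i + 1] + u[i - 1]))
             / (1.0 + lambda);
}
```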

It's nice that a lot of the schemes discussed in the paper produce tridiagonal systems... solving those with the Thomas algorithm brings the cost per time step down to O(N).
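That O(N) figure comes from the Thomas algorithm, i.e., Gaussian elimination specialized to tridiagonal systems. A minimal sketch (it performs no pivoting, so it assumes a well-behaved, e.g. diagonally dominant, system, which the FD discretizations here typically produce):

```cpp
#include <cstddef>
#include <vector>

// Thomas algorithm: solve a tridiagonal system in O(N).
// a = sub-diagonal (a[0] unused), b = main diagonal,
// c = super-diagonal (c[N-1] unused), d = right-hand side.
// Arguments are taken by value because the sweep overwrites them.
std::vector<double> thomasSolve(std::vector<double> a, std::vector<double> b,
                                std::vector<double> c, std::vector<double> d)
{
    const std::size_t N = b.size();

    // Forward elimination: wipe out the sub-diagonal.
    for (std::size_t i = 1; i < N; ++i)
    {
        const double m = a[i] / b[i - 1];
        b[i] -= m * c[i - 1];
        d[i] -= m * d[i - 1];
    }

    // Back substitution.
    std::vector<double> x(N);
    x[N - 1] = d[N - 1] / b[N - 1];
    for (std::size_t i = N - 1; i-- > 0; )
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i];
    return x;
}
```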

Section 7.2 reminds me a lot of the Maxwell relations in statistical mechanics.

Wow... the fully explicit scheme didn't do very well, did it? \( -2.7\times 10^{170} \). :) There's an unhappy customer.
-- Although, to me, that raises the question: a forward-space treatment of the convection term could cause it to blow up like that. A backward-space (upwind) treatment might have kept it roughly stable (I think)... at least, that's what would have happened with the 1D Burgers' equation with diffusion.

KT4 was the most accurate, but Yanenko's scheme was pretty accurate and really quick (page 124).
Crank-Nicolson looked the best of the time-marching schemes.

His computer was a lot like the one I am typing from right now.

For his performance timings, C++ would've been a lot faster than MATLAB, but getting those graphs out would have been a lot more of a hassle.

It looks like Sheppard entered his bibliography by hand. LaTeX can be wonky about how it treats its bibliographies; I can sympathize.

For a discourse on numerical schemes suited to solving Black-Scholes, it looks really well done. (Assessed by someone who hasn't yet solved Black-Scholes himself.)
 
Sheppard's thesis is probably the best thesis on FDM that I have seen. Please take your time as it contains stuff that is original, brave and robust.

Some corrections:

ADI is not good for his problem. He uses Soviet splitting, which is not the same thing.
Crank-Nicolson is not good for his problem. CN is OK, except at the strike K, where it produces a load of junk.

He uses Extrapolated Euler.
 
I had seen the von Neumann stability analysis before, but not the others that he used.

Has he found a place in the industry?
 
I had seen the von Neumann stability analysis before, but not the others that he used.

Has he found a place in the industry?

1. You have a *lot* of reading to do. VN is cute, but only for simple initial value problems with constant coefficients.
2. Absolutely.
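For anyone following along, a quick sketch of the standard von Neumann calculation in exactly that restricted setting: the explicit scheme for the constant-coefficient heat equation \( u_t = a\,u_{xx} \). It is also where the 1/2 bound quoted earlier in the thread comes from. Substituting the Fourier mode \( u^n_j = \xi^n e^{\mathrm{i}kj\Delta x} \) into \( u^{n+1}_j = u^n_j + \lambda\left(u^n_{j+1} - 2u^n_j + u^n_{j-1}\right) \), with \( \lambda = a\,\Delta t/\Delta x^2 \), gives the amplification factor

\( \xi = 1 - 4\lambda \sin^2\!\left(\frac{k\,\Delta x}{2}\right), \)

and demanding \( |\xi| \le 1 \) for every mode yields \( \lambda \le \frac{1}{2} \).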
 
So, you've had a great deal of experience with FDM, FEM; have you ever worked with Smoothed Particle Hydrodynamics (SPH)?
 
So, you've had a great deal of experience with FDM, FEM; have you ever worked with Smoothed Particle Hydrodynamics (SPH)?


Well, since 1973. :eek:

Is SPH a kind of meshless method? I dabbled a little with meshless methods but stopped after a bit. There are a few issues with them, not the least of which is that they lead to a full, ill-conditioned matrix system at each time level.

Meshless is not so popular in QF. It's FDM (and, to a lesser extent, FEM).
 
That makes sense. The better applications for meshless methods are where accuracy is not quite as important, like graphics. (Mesh-based methods, in general, seem to produce more accurate results.)
 