There are several questions here.
So... I've heard that in solving the Black-Scholes equation people tend to use a fairly simple trick to find its solution... it's just a bare-bones finite difference discretization.

$$\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV = 0$$
So... we have a diffusion term, an advection term, and a price sink.
So using first-order time stepping, first-order backward (upwind) differencing for the advection term, and second-order centered differencing for the diffusion term, we have:

$$V^{n+1}_{i} = V^{n}_{i} - \frac{1}{2}\sigma^2 S_i^2\frac{\Delta t}{\Delta S^2}\left(V^{n}_{i+1} - 2V^n_{i} + V^n_{i-1}\right) - rS_i\frac{\Delta t}{\Delta S}\left(V^{n}_{i} - V^n_{i-1}\right) + r\,\Delta t\,V^n_i$$
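A minimal NumPy sketch of this update (the function name and grid layout are my own; boundary values are left to the caller, and note that in practice one marches in $\tau = T - t$ from the terminal payoff, which flips the signs of the spatial terms so the diffusion is stabilizing):

```python
import numpy as np

def bs_explicit_step(V, S, r, sigma, dt, dS):
    """One explicit Euler step of the update above:
    centered second difference for the diffusion term,
    backward (upwind) first difference for the advection term.
    Interior points only; boundary nodes are left untouched."""
    V_new = V.copy()
    diff = V[2:] - 2.0 * V[1:-1] + V[:-2]   # V_{i+1} - 2 V_i + V_{i-1}
    adv = V[1:-1] - V[:-2]                  # V_i - V_{i-1}
    V_new[1:-1] = (V[1:-1]
                   - 0.5 * sigma**2 * S[1:-1]**2 * (dt / dS**2) * diff
                   - r * S[1:-1] * (dt / dS) * adv
                   + r * dt * V[1:-1])
    return V_new
```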
Fantastic. Now we figure out what goes into each of these constants. Those constants make up our stability conditions... there are only two CFL-type numbers in this equation that we care about (one for the diffusion term, one for the advection term), and we make sure each of them stays below 1/2 (because a high-frequency trading machine backed by a PDE whose solution is in the process of blowing up is probably very bad... the clients wouldn't appreciate the fallout).

--> What order of approximation is actually used out in your field? The simple critter I have above can't be run with too large a timestep or it loses stability. Also, if you need to run the thing over a lot of stocks... the spatial domain (all the S values) can be split across different processors, with a few ghost cells placed at the boundaries between the processor regions. Unfortunately... all of those processors have to finish before getting to the next timestep... right?
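To make the two stability numbers concrete: the diffusion number is $\frac{1}{2}\sigma^2 S^2 \Delta t / \Delta S^2$ and the advection (Courant) number is $rS\,\Delta t/\Delta S$, and both are worst at the top of the grid since the coefficients grow with S. A small helper for the largest stable timestep (function name and the 1/2 limit taken from the discussion above):

```python
def max_stable_dt(S_max, r, sigma, dS, limit=0.5):
    """Largest dt keeping both grid numbers below `limit`.
    Both numbers peak at S_max, where the coefficients are largest."""
    dt_diffusion = limit * dS**2 / (0.5 * sigma**2 * S_max**2)  # diffusion number
    dt_advection = limit * dS / (r * S_max)                     # Courant number
    return min(dt_diffusion, dt_advection)
```

For typical parameters the diffusion limit dominates, and it shrinks quadratically as you refine the grid, which is exactly why explicit schemes get expensive.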
Now... with the advent of really fast GPUs, the traditional finite difference solution to the Black-Scholes equation can probably be computed a lot faster on a pile of them. One of the fastest cards I'm aware of is the NVIDIA GTX Titan. One of these critters costs about $1000 apiece... and you get close to 6 TFLOPS of theoretical throughput (in reality it's probably closer to 2 TFLOPS). But still... 1000 of these suckers gets you into the petaflop regime. Now... since GPUs tend to be best at embarrassingly parallel problems... the finite difference scheme would fare poorly on this hardware compared with a Monte Carlo method, because MC is embarrassingly parallel and the finite difference method, with its per-timestep synchronization, is not.
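To see why Monte Carlo maps so well onto GPU threads, here is a minimal NumPy sketch of a European call pricer under geometric Brownian motion (function name mine): every path is fully independent, so there is nothing to synchronize until the final reduction.

```python
import numpy as np

def mc_call_price(S0, K, T, r, sigma, n_paths=100_000, seed=0):
    """Monte Carlo price of a European call: simulate terminal prices
    under GBM in one shot, average the discounted payoff.
    Each path is independent -- the embarrassingly parallel part."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    payoff = np.maximum(ST - K, 0.0)
    return np.exp(-r * T) * payoff.mean()
```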
Some of you have actually built some of the systems used on Wall Street. Has this been done yet? Is a cluster of CPUs (perhaps vector processors) actually better to use than a great big cluster of GPUs? What have your experiences with these systems been?
How many points are usually used in the spatial domain? And how do you get the measure of volatility that goes into the equation?
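On the volatility question, to frame possible answers: two common routes are historical volatility (annualized standard deviation of log returns) and implied volatility backed out of quoted option prices. A sketch of the historical estimate (function name mine; 252 trading days per year assumed):

```python
import numpy as np

def annualized_vol(prices, periods_per_year=252):
    """Annualized historical volatility from a series of closing prices:
    sample std dev of log returns, scaled by sqrt(periods per year)."""
    log_ret = np.diff(np.log(np.asarray(prices, dtype=float)))
    return log_ret.std(ddof=1) * np.sqrt(periods_per_year)
```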