
Blog: Articles on C++11 and Computational Finance (by Daniel J. Duffy)

C++:
// An example of a Bridge pattern.
//
// 2022-10-16 DJD #7
// The Bridge structural pattern models abstractions (defined in class hierarchies,
// for example). Each class can have several (volatile) implementations, which are also
// usually defined in a class hierarchy. The two class hierarchies should be able to
// vary independently of each other.
// The Bridge pattern is well-known and probably one of the most difficult patterns to
// program in C++98. In the following, we propose a solution using a number of features
// in C++20. Some requirements are:
//
//    1. Efficiency: avoiding subtype polymorphism, i.e. no virtual functions.
//    2. Clean separation of state/data and behaviour between abstractions and implementations
//       (good portability).
//    3. Good maintenance: a) no monolithic classes, b) reduce proliferation of classes.
//
// To achieve this end, we use the following:
//
//    1. Use of templates and parametric polymorphism (Chaos to Classes 1996, page 31).
//    2. No subtype inheritance, heap objects in classes.
//    3. Static inheritance using CRTP Pattern.
//    4. Template template parameter (to give extra level of indirection).
//    5. Composition rather than inheritance. "Horizontal" delegation.
//    6. Embedded Template Method pattern.
//    7. C++ Concepts to define protocols and provides-requires interfaces.
//    8. Variadic parameters (especially for Strategy).
//  9. The code contains design principles that can be applied to other patterns that
//     use a combination of inheritance and composition, for example Strategy, Template Method,
//     Visitor, State, Adapter, Decorator(?), Proxy.
//  10. Integration with FP, e.g. std::function<>
//
// Design/Architecture rationale. Four conceptual levels (from blue sky to hardware):
//
// 1. Protocols and interface definitions using C++ Concepts (the approach is essentially an
// Architectural Description Language).
// 2. Top-level abstract/device-independent class. It cannot be instantiated as such because it
// uses CRTP and we must provide a class as a template parameter. The class is called AbstractSensor.
// We can directly inherit from this class (e.g. the hard-wired SimpleSensor class) without introducing
// the full machinery of the Bridge pattern.
// 3. We can now create device-independent derived classes from AbstractSensor. These are domain
// classes with device-independent properties (e.g. AbstractSimulatedSensor) that delegate to
// hardware classes.
// 4. Hardware/simulated software classes (e.g. FanucSensor, PhilipsSensor)  with device-dependent
// properties and that communicate with external systems via drivers.
//
        


#include <iostream>
#include <string>

// Protocols for sensor implementations, using template template parameter

// An abstract method
template <template <typename T> class Imp, typename T>
    concept IStart = requires (Imp<T> d)
{
    d.start();
};

// An abstract method
template <template <typename T> class Imp, typename T>
    concept IProcess = requires (Imp<T> d)
{
        d.process();
};

// An abstract method
template <template <typename T> class Imp, typename T>
    concept IStop = requires (Imp<T> d)
{
        d.stop();
};


// Essentially, an interface := set of abstract methods
template <template <typename T> class Imp, typename T>
    concept ISensorProtocol = IStart<Imp, T> && IProcess<Imp, T> && IStop<Imp, T>;

//

// Hierarchy of "abstract sensors" classes
// Each class can have an implementation
// Bridge pattern: compare it to GUI widget hierarchy in Windows, Linux, DOS, Autocad

template <typename NumericData, typename Derived>
    class AbstractSensor
{ // Using CRTP, Template Template Parameter and Template Method Pattern
  // This is the compile-time CRTP version of an ABC class with abstract methods.

    private:
        // Minimal data
    
    public:
        AbstractSensor()  {}

        void start()
        {
            std::cout << "top level start \n";
            static_cast<Derived*>(this)->start();
        }

        void process()
        { // Implements Template Method Pattern

            // do 1...
            
            std::cout << "top level process\n";
            static_cast<Derived*>(this)->process();

            // do 3...
        }

        void stop()
        {
            std::cout << "top level stop \n";
            static_cast<Derived*>(this)->stop();
        }

        void reset()
        { // A kind of template method pattern

            std::cout << "*** Resetting ...";
            stop();
            start();
        }

};

template <typename NumericData>
    class SimpleSensor : public AbstractSensor<NumericData, SimpleSensor<NumericData>>
{ // Using CRTP, Template Template Parameter and Template Method Pattern
    private:
        // Minimal data

    public:
        SimpleSensor() {}

        void start()
        {
            std::cout << "simple start \n";
        
        }

        void process()
        { // Implements Template Method Pattern

            // do 1...

            std::cout << "simple process\n";
            
            // do 3...
        }

        void stop()
        {
            std::cout << "simple stop \n";
        
        }

};

template <template <typename T> class Imp,typename T>
                requires ISensorProtocol<Imp, T>
class AbstractSimulatedSensor :
            public AbstractSensor<T, AbstractSimulatedSensor<Imp, T>>
        
{
    private:
        // Device-independent attributes
        double _delay;
        std::string _name;

        Imp<T>& _impl;

    public:
        AbstractSimulatedSensor(Imp<T>& imp)
                : _delay(0.01), _name("simulated"), _impl(imp)
        {}

        AbstractSimulatedSensor(Imp<T>& imp, double delay, const std::string& name)
            : _delay(delay), _name(name), _impl(imp)
        {}

        void start()
        {
        
            std::cout << _name + " slow start\n";
            _impl.start();
        }

        void process()
        { // Implements Template Method Pattern

        
            std::cout << "delay " << _delay << '\'n';
            std::cout << " can't you see I'm working on it\n";
            _impl.process();
        }

        void stop()
        {
               std::cout << _name + " slow stop\n";
            _impl.stop();
        }

        void Switch(Imp<T>& newImpl)
        { // Note: _impl is a reference and cannot be reseated; this copy-assigns the state of
          // newImpl into the currently bound implementation object.
            _impl = newImpl;
        }

        double delay() const
        {
            return _delay;
        }

        std::string name() const
        {
            return _name;
        }

};
 
TEST STUFF

C++:
template <typename T>
    class PhilipsSensor
{
    private:
       

    public:
        PhilipsSensor() {}

        void start()
        {
            std::cout << " Philips start\n";
   
        }

        void process()
        {
            std::cout << " Philips process\n";
           
        }

        void stop()
        {
            std::cout << " Philips stop\n";
        }

    };

    template <typename T>
        class FanucSensor
    {
    private:


    public:
        FanucSensor() {}

        void start()
        {
            std::cout << " FANUC start\n";

        }
       
        void process()
        {
            std::cout << " FANUC process\n";

        }

        void stop()
        {
            std::cout << " FANUC stop\n";
        }

    };
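
    // -----------------------------------------------------------------------------------------
    // Editorial sketch, not part of the original post: the protocol concepts defined earlier can
    // be checked at compile time, and adding a further implementation only needs a class that
    // satisfies ISensorProtocol; the abstraction side is reused unchanged. SiemensSensor is a
    // hypothetical name invented purely for this illustration.

    static_assert(ISensorProtocol<SimpleSensor, float>);
    static_assert(ISensorProtocol<PhilipsSensor, double>);
    static_assert(ISensorProtocol<FanucSensor, double>);

    template <typename T>
        class SiemensSensor
    {
    public:
        SiemensSensor() {}

        void start()   { std::cout << " Siemens start\n"; }
        void process() { std::cout << " Siemens process\n"; }
        void stop()    { std::cout << " Siemens stop\n"; }
    };

    static_assert(ISensorProtocol<SiemensSensor, double>);

    // Usage would mirror the cases in main(), e.g.
    //    SiemensSensor<double> sSensor;
    //    std::string name("siemens");
    //    AbstractSimulatedSensor<SiemensSensor, double> sensor3(sSensor, 0.02, name);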




    int main()
    {

        SimpleSensor<float> sims;
        sims.start();
        sims.process();
        sims.stop();
        sims.reset();


        using T = double;
        T delay = 0.01;
        std::string fanucStr("fanuc");
        std::string philipsStr("philips");

        FanucSensor<T> fSensor;
        PhilipsSensor<T> pSensor;
        AbstractSimulatedSensor<FanucSensor, T> sensor(fSensor, delay, fanucStr);
        AbstractSimulatedSensor<PhilipsSensor, T> sensor2(pSensor, delay, philipsStr);

        // Direct to the simulated sensor
        //sensor.Switch(pSensor); NOPE, imp is compile-time
        sensor.start();
        sensor.process();
        sensor.stop();
        sensor.reset();

        sensor2.start();
        sensor2.process();
        sensor2.stop();

    }
 

A Short History of Computational Finance 1990-2020, a Partial Differential (PDE/FDM) Approach​


DANIEL J. DUFFY'S HISTORICAL REVIEW FOCUSES ON THE APPLICABILITY OF PARTIAL DIFFERENTIAL EQUATION (PDE) TECHNIQUES AND RELATED NUMERICAL METHODS, PARTICULARLY THE FINITE DIFFERENCE METHOD (FDM), AS APPLIED TO OPTION PRICING AND HEDGING APPLICATIONS.

This article traces the development of computational finance during the period 1990–2020. The views and conclusions are based mainly on the author’s involvement in this area. We focus on the mathematical and numerical models that form a crucial part of this field. Special attention areas are option pricing, partial and stochastic differential equations, and their numerical approximation. We take what we could call a lifecycle approach by tracing the evolution of computational finance, from problem description to design and implementation. The presentation is semi-technical, and it should appeal to a wide range of readers.


 


The article is concise in its review of the history of computational finance, a PDE/FDM approach. It points out the limitations of using a heuristic (trial and error) approach to problem-solving, especially PDEs that model derivatives. It also introduces the readers to more recent numerical methods for computing option sensitivities/greeks, showing the importance of numerical methods and their application. Thanks for sharing.

Questions:
  1. In your opinion, what alternative method (scheme) can handle discontinuous initial conditions (the option payoff), given that they affect the accuracy of FDM approaches?
  2. As an expert, I would like your opinion on where we draw the line between a quant acquiring top-notch programming skills and a full-blown software engineer.
 
1. I have resolved the flaws in heuristic methods for Black as discussed in the article and book.
2. Can you be more precise? BTW "software engineer" is a meaningless term. Some don't even (know how to) program.
"Programmer" is nicer.

A bit dated, but humorous

 
Excellent, Dr. Duffy. However, Black's assumptions make your modification of the heuristic method restrictive or limited.

Counterexample: Black assumes that the option holder cannot exercise the option before the expiration date, so the heuristic method would fail for an American option.

Merriam-Webster defines software engineering as "a branch of computer science that deals with the design, implementation, and maintenance of complex computer programs." So a software engineer who cannot program should be the exception, not the norm. Since you say the term "programmer" is better, I'll use the word "programmer."

Just curious, what level of expertise in programming would you consider sufficient for a Quant?
 
Nice thing about definitions is so many to choose from.

Just curious, what level of expertise in programming would you consider sufficient for a Quant?
Too ambiguous.

Can you take a concrete case?

e.g., create a C++ class library for interest rates.
Write a blockchain.
 

That's true about definitions. Taking your example of creating a C++ class library for interest rates, I think an excellent knowledge of data structures and algorithms should suffice. In an ever-evolving world of technology, I was just wondering what the bounds were for quants who want to acquire excellent programming skills.
 
Not sufficient and possibly not necessary: "data structures and algorithms" can mean anything.

Here is a test: reproduce the results in this article in C++. So, design and implement a solution.


Knowledge of the bounds:

IR model
PDE/FDM methods
numerical skills
programming skills
testing skills.

This is what a quant should know IMO.
 
Excellent, Dr. Duffy. However, Black's assumptions make your modification of the heuristic method restrictive or limited.

Counterexample: Black assumes that the option holder cannot exercise the option before the expiration date, so the heuristic method would fail for an American option.



Replace my informal text with:

I have resolved the flaws in heuristic methods for 1-factor and 2-factor PDEs for equity, IR, and hybrid models, including early exercise, as discussed in the article and book.

European option ==> PDE
American option ==> free boundary value problem.
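
For concreteness, the American case can be written as a linear complementarity (free boundary) problem. This is the standard textbook formulation, added here for the reader; it is not quoted from the book:

\[
\min\!\left( -\frac{\partial V}{\partial t} - \tfrac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} - r S \frac{\partial V}{\partial S} + r V,\; V - \Phi \right) = 0,
\qquad V(S,T) = \Phi(S),
\]

where \(\Phi\) is the payoff; the early-exercise (free) boundary is the curve along which the two arguments of the \(\min\) exchange roles.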

Solved. Just buy the book where everything has been made precise.
 
Doing maths in C++20


C++:
// Test001.cpp
/*

(C) Datasim Education BV 2022
2022-10-20 DJD #8

Super-stripped-down sample code to model all kinds of mathematical functions that are needed
in numerical applications. Here we reduce the scope by introducing scalar-valued
functions of a scalar argument.

Rationale/preview:

    1. Introducing some C++ features to support functional programming style.
    2. (mathematically-defined) Vector spaces of scalar, vector and vector-valued functions.
    3. Applications of Vector spaces in numerical computing (optimisation, PDE, ML etc. etc.)
        (e.g. build complex functions from simpler functions)
    4. Heterogeneous containers, functions and algorithms (Boost C++ Fusion and Hana).
    5. Integration with Design Patterns, e.g. delegation (stateful/stateless versions).

    Ideally, this should really be in some new Boost C++ library.

** No further comments for the moment. Later: variadic arguments.

*/
#include <functional>
#include <cmath>
#include <iostream>
#include <string>

#include <boost/math/tools/minima.hpp>

template <typename T>
    using FType = std::function<T (T)>;

template <typename T>
    FType<T> operator + (const FType<T>& f, const FType<T>& g)
{ // Addition

    return [=](T x)
    {
        return  f(x) + g(x);
    };
}

template <typename T>
    FType<T> operator + (const FType<T>& f, T a)
{ // Scalar addition

        return [=](T x)
        {
            return  f(x) + a;
        };
}

template <typename T>
    FType<T> operator * (T a, const FType<T>& f)
{ // Scalar multiplication

        return [=](T x)
        {
            return  a*f(x);
        };
}


template <typename T>
    FType<T> operator - (const FType<T>& f, const FType<T>& g)
{ // Subtraction

        return [=](T x)
        {
            return  f(x) - g(x);
        };
}

template <typename T>
    FType<T> operator - (const FType<T>& f)
{ // Unary negation

        return [=](T x)
        {
            return  -f(x);
        };
}

template <typename T>
    FType<T> operator + (const FType<T>& f)
    { // Unary plus

        return [=](T x)
        {
            return  +f(x);
        };
    }

template <typename T>
    FType<T> operator << (const FType<T>& f, const FType<T>& g)
{ // Composition of functions

        return [=](T x)
        {
            return  f(g(x));
        };
}


template <typename T>
    FType<T> assign(T constant)
{ // Create a function from a scalar constant

        return [=](T x)
        {
            return  constant;
        };
}


template <typename T>
    FType<T> exp(const FType<T>& f)
{ // exp

        return [=](T x)
        {
            return  std::exp(f(x));
        };
}

template <typename T>
    FType<T> Exp(const FType<T>& f)
{ // exp

        return [=](T x)
        {
            return  std::exp(f(x));
        };
}

template <typename T>
    FType<T> log(const FType<T>& f)
{ // log

        return [=](T x)
        {
            return  std::log(f(x));
        };
}

template <typename T>
    FType<T> pow(const FType<T>& f, int n)
    { // powers of a function f(x)*...*f(x) (n times)

        return [=](T x)
        {
            return  std::pow(f(x), n);
        };
    }
//////////////////////////////////////////////////////////////////////////////////////////////////
//
// Variadic arguments
//

template <typename RT, typename ...T>
    using FTypeV = std::function<RT(T... args)>;

template <typename RT, typename ...T>
    FTypeV<RT, T...> operator + (const FTypeV<RT, T...>& f, const FTypeV<RT, T...>& g)
{ // Addition

        return [=](T... x)
        {
            return  f(x...) + g(x...);
        };
}

// User functions
double G(double x)
{
    return x+1;
}

double H(double x, double y)
{
    return x + y;
}

double AckleyGlobalFunction(double x)
{
    double a = 20.0; double b = 0.2; double c = 2.0*3.14159;


    return -a*std::exp(-b*x) - std::exp(std::cos(c*x)) + a + std::exp(1.0);
}

double AckleyGlobalFunctionII(double x)
{ // Building the functions in a HOF assembly process

    double a = 20.0; double b = 0.2; double c = 2.0*3.14159;

    // Bit unwieldy; used to show function vector spaces.
    FType<double> f11 = [&](double x) {return -b*x; };
    FType<double> f1 = -a*exp(f11);

    FType<double> f22 = [&](double x) {return std::cos(c*x); };
    FType<double> f2 = -exp(f22);
//    FType<double> f2A = f2 << f1 << f11;
    //FType<double> f3 = assign(a + std::exp(1.0));
    FType<double> f2B = [&](double x) {return std::exp(1.0) ; };
    FType<double> f3 = f2B + a;
    FType<double> Ackley = f1 + f2 + f3;
   
    return Ackley(x);
}

double TestFunction(double x)
{ // To show how function algebra works


    // Bit unwieldy; used to show function vector spaces.
    FType<double> f1 = [](double x) {return x*x; };
    FType<double> f2 = [&](double x) {return f1(x)*f1(x); };
    FType<double> f3 = [&](double x) {return f2(x)*f2(x); };

    FType<double> sumFunc = f1 + f2 + f3;

    return sumFunc(x);
}


double RastriginGlobalFunction(double x)
{
    double A = 10.0;

    return A + x*x - A*std::cos(2.0*3.14159*x);

}

double SphereGlobalFunction(double x)
{
    return x*x;
}

// Testing a function in terms of components
double SphereGlobalFunction(double x, double y, double z)
{
    return x * x + y * y + z * z;
}

double SphereX(double x, double y, double z)
{
    return x * x;
}

double SphereY(double x, double y, double z)
{
    return y * y;
}

double SphereZ(double x, double y, double z)
{
    return z * z;
}


int main()
{
    FType<double> e = [](double x) { return 5.0; };
    FType<double> f = [](double x) { return G(x); };
    FType<double> g = [](double x) { return x*x; };

    // Currying, I -> H(double x, double y)
    FType<double> bindFun = std::bind(H, 1.0, std::placeholders::_1);
    auto func1 = g + bindFun;
    std::cout << "13, func1? " << func1(3.0) << '\n';

    // Currying, II
    double y = 1.0;
    FType<double> capturedFun = [&](double x) { return H(x, y); };
    auto func2 = g + capturedFun;
    std::cout << "13, func2? " << func2(3.0) << '\n';

    auto h1 = f + g;
    FType<double> h2 = f + exp(e);
    FType<double> h3 = g + f;

    // Experimenting
    FType<double> h4 = exp(exp(f));
    FType<double> h5 = exp(exp(f) + g);
    FType<double> h6 = log(exp(h5) + h5);
    FType<double> h7 = h6 + 1.0;

    double x = 5.0;
    std::cout << "1 " << h1(x) << "\n";
    std::cout << "2 " << h2(x) << "\n";
    std::cout << "3 " << h3(x) << "\n";

    x = 0.0;
    std::cout << "4 " << h4(x) << "\n";
    std::cout << "5 " << h5(x) << "\n";
    std::cout << "6 " << h6(x) << "\n";
    std::cout << "7 " << h7(x) << "\n";

    { // Powers

        FType<double> f = [](double x) { return x*x; };
        int n = 4;
        auto prodFun = pow(f, n);

        double x = 2.0;
        std::cout << "Power:  " << prodFun(x) << "\n";
    }

    { // Unary minus and plus

        auto f = assign(5.0);
        auto f1 = exp(f);
        auto f2 = +exp(f);
        auto f3 = -exp(f);
        auto f4 = -(-exp(f));
        auto f5 = +(-exp(f));
        auto f6 = -(+exp(f));

        double x = 5.0;
        std::cout << "exp stuff:  " << f1(x) << ", " << f2(x) << ", "
                    << f3(x) << ", " << f4(x) << ", " << f5(x) << ", " << f6(x) << '\n';
    }

   

    // B. Ackley (same constants as in AckleyGlobalFunction)
    double a = 20.0; double b = 0.2; double c = 2.0*3.14159;

    FType<double> f11 = [&](double x) {return -b*x; };
    FType<double> f1 = -a*exp(f11);

    FType<double> f22 = [&](double x) {return std::cos(c*x); };
    FType<double> f2 = -exp(f22);
    FType<double> f2AB = f2 << f1;
    FType<double> f2A = f2AB << f11;
    FType<double> f3 = assign(a + std::exp(1.0));
    FType<double> Ackley = f1 + f2 + f3;
    FType<double> Ackley2 = [](double x) { return AckleyGlobalFunction(x); };

    x = -1.0;
    while (x < 1.0)
    { // Sanity clause: compute the value in different ways

        std::cout << "Error " << x << " " <<
            AckleyGlobalFunction(x) - AckleyGlobalFunctionII(x) << " ";
        std::cout << "Value " << AckleyGlobalFunction(x) << ","
            << AckleyGlobalFunctionII(x) << ", " << Ackley2(x) << '\n';
        x += 0.1;
    }

    // C. Rastrigin
    double A = 10.0;
    FType<double> fr1 = assign(A);
    FType<double> fr2 = [&](double x) {return x*x; };
    FType<double> fr3 = [&](double x) {return std::cos(2.0*3.14159*x); };
    FType<double> fr4 = -A*fr3;
    FType<double> func = fr1 + fr2 + fr4;
    x = -1.0;
    while (x < 1.0)
    {
        std::cout << x << ", " << RastriginGlobalFunction(x) << ", " << func(x) << '\n';
        x += 0.1;
    }

   
    {    //      RT      x       y        z
        FTypeV<double, double, double, double> f1 = SphereX;
        FTypeV<double, double, double, double> f2 = SphereY;
        FTypeV<double, double, double, double> f3 = SphereZ;
        FTypeV<double, double, double, double> fSum = f1 + f2 + f3;
        std::cout << "Sphere " << fSum(2, 2, 2) << '\n'; // 12

    }
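
    { // Editorial sketch, not part of the original post: a composed FType<> function fed into
      // Boost.Math's Brent minimiser (<boost/math/tools/minima.hpp> is included above but not
      // otherwise used). brent_find_minima(f, a, b, bits) brackets a minimum of f on [a, b]
      // to roughly 'bits' bits of precision and returns the pair (argmin, minimum value).

        FType<double> sphere = [](double x) { return x * x; };
        FType<double> shifted = sphere + 1.0;              // function algebra from above

        auto result = boost::math::tools::brent_find_minima(shifted, -4.0, 4.0, 24);
        std::cout << "Brent minimum: x = " << result.first
                  << ", f(x) = " << result.second << '\n'; // expect x ~ 0, f(x) ~ 1
    }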
}
 

Attachments

  • Test001.cpp (8.2 KB)
Merriam-Webster defines software engineering as "a branch of computer science that deals with the design, implementation, and maintenance of complex computer programs." So a software engineer who cannot program should be the exception, not the norm. Since you say the term "programmer" is better, I'll use the word "programmer."

This definition is not even wrong. In OOP jargon, Software Engineer is an Abstract (Base) Class. Derived classes correspond to specific roles.
C++:
class SE
{
public:
    virtual void doit() = 0; // abstract method; concrete roles override it
};
 
theses on ML etc.


 
Numerical Solution of Partial Integro Differential Equations (PIDE)

PIDEs are common in option pricing and hedging problems with jumps, for example in energy markets. Monte Carlo methods can be used to simulate jump-diffusion processes, but using them for American options and for computing option sensitivities is not a trivial task. And they tend to be slow.
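
As a concrete, deliberately minimal illustration of the Monte Carlo route mentioned above, here is a sketch of a single risk-neutral path of a Merton-style jump diffusion. It is not taken from the attachments; parameter values are placeholders.

C++:
#include <cmath>
#include <iostream>
#include <random>

// One path of S following a Merton jump diffusion under the risk-neutral measure:
// log-returns have a Gaussian diffusion part plus a Poisson number of lognormal jumps,
// with the drift compensated by lambda*kappa.
int main()
{
    double S = 100.0;                    // spot
    double r = 0.05, sigma = 0.2;        // risk-free rate and diffusion volatility
    double lambda = 0.3;                 // jump intensity
    double muJ = -0.1, sigmaJ = 0.15;    // mean and st. dev. of log-jump size
    double kappa = std::exp(muJ + 0.5 * sigmaJ * sigmaJ) - 1.0;  // E[J - 1]

    double T = 1.0; int N = 252; double dt = T / N;

    std::mt19937_64 rng(42);
    std::normal_distribution<double> normal(0.0, 1.0);
    std::poisson_distribution<int> poisson(lambda * dt);

    for (int n = 0; n < N; ++n)
    {
        double diffusion = (r - lambda * kappa - 0.5 * sigma * sigma) * dt
                         + sigma * std::sqrt(dt) * normal(rng);

        int nJumps = poisson(rng);       // number of jumps in this step
        double jump = 0.0;               // sum of log-jump sizes
        for (int k = 0; k < nJumps; ++k)
            jump += muJ + sigmaJ * normal(rng);

        S *= std::exp(diffusion + jump);
    }

    std::cout << "S(T) = " << S << '\n';
}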

It is possible to apply the Finite Difference Method (FDM) directly to the PIDE and the article below (vintage 2006) discusses a number of accepted schemes at the time of writing. Some observations are:



. Not all schemes are unconditionally stable.

. They are often first-order accurate in time.

. Some schemes were used that are inappropriate and/or over-engineered for the problem, leading to workarounds.

. PIDE methods for three dynamic state variables (Carr and Wu (2004)) are far from trivial.

. Too many heuristics.



An interesting project could be to apply the modern methods in my 2022 book “Numerical Methods in Computational Finance” to PIDE (as I did in the PDE case).




In particular:

. Marchuk, Strang, Yanenko methods

. Alternating Direction Explicit (ADE) method (second order accurate, EXPLICIT, stable and can be used for three dynamic state variables problems). ADE is super-fast.

. Domain transformation instead of heuristic domain truncation tricks.

. Use the Continuous Sensitivity Equation (CSE) approach to compute option sensitivities (see the sketch after this list).

. Exponential fitting to avoid first-order accurate upwinding/downwinding in drift (convection) term.

. Test cases in finance and energy.

. The mathematical foundations for PIDE.

. A good object-oriented C++ (of course!) project with focus on accuracy, efficiency and reusability (using Design Patterns). For example, the MSc thesis of Shengxu Huang (University of Birmingham (2014)) uses PIDE as a composition of PDE and IE terms. The various finite and infinite jump types (e.g. Merton, Kou, VG, NIG) would be Strategy patterns.
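
As a sketch of the Continuous Sensitivity Equation idea mentioned in the list above (a standard derivation, written out here for Black-Scholes vega; it is not quoted from the book or the attachments): differentiate the pricing PDE with respect to the parameter of interest and solve the resulting PDE for the sensitivity. Starting from

\[
\frac{\partial V}{\partial t} + \tfrac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + r S \frac{\partial V}{\partial S} - r V = 0,
\]

set \( U = \partial V / \partial \sigma \) and differentiate with respect to \(\sigma\):

\[
\frac{\partial U}{\partial t} + \tfrac{1}{2}\sigma^2 S^2 \frac{\partial^2 U}{\partial S^2} + r S \frac{\partial U}{\partial S} - r U + \sigma S^2 \frac{\partial^2 V}{\partial S^2} = 0,
\qquad U(S,T) = 0,
\]

since the payoff does not depend on \(\sigma\). The same finite difference machinery used for \(V\) can then be reused for \(U\), with \(V\) feeding the source term.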



This could be the makings (in my opinion) of a nice MSc thesis, at least.
 

Attachments

  • PIDE.pdf (684.3 KB)
  • PIDE_01.pdf (128 KB)
Some Remarks on Solving PIDEs
-------------------------------------

The Finite Difference Method (FDM) is a popular numerical method to approximate the solution of PIDEs, for both the differential and integral terms. In principle, this approach leads to some less than desirable consequences:

. FDM is good at approximating derivatives; it is less suitable for approximating integral terms, in particular integrals that are defined on infinite or semi-infinite intervals.

. FDM for PIDE leads to dense matrix systems, in contrast to the tridiagonal matrix systems that are common for PDE. Solving dense matrix systems can be computationally expensive. This problem can be addressed by (inefficient) explicit finite difference schemes or (non-deterministic) matrix iterative solvers; see the sketch after this list.

. Traditional FDM for PIDE does not scale well to multi-factor PDE with multiple integrals.
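
To make the dense-matrix point concrete, here is a minimal sketch (illustrative only, using simple trapezoidal quadrature; it is not taken from the attached papers). Discretising the integral term couples a node to every other grid node, so the corresponding matrix row is full, whereas the discretised diffusion term only touches the three neighbouring nodes.

C++:
#include <cmath>
#include <iostream>
#include <vector>

// Build the row that a trapezoidal-rule discretisation of an integral term
//     I(x_i) = integral over [a,b] of k(x_i, y) V(y) dy
// contributes at node i. Every entry is (in general) non-zero: a dense row.
// By contrast, a central difference for V'' at node i touches only i-1, i, i+1.
int main()
{
    const int N = 8;                          // tiny grid, for display only
    const double a = -1.0, b = 1.0;
    const double h = (b - a) / (N - 1);

    auto kernel = [](double x, double y) { return std::exp(-(x - y) * (x - y)); }; // sample kernel

    const int i = N / 2;                      // one interior node x_i
    const double xi = a + i * h;

    std::vector<double> row(N);
    for (int j = 0; j < N; ++j)
    {
        double w = (j == 0 || j == N - 1) ? 0.5 * h : h;   // trapezoidal weights
        row[j] = w * kernel(xi, a + j * h);                // node i coupled to every node j
    }

    for (double v : row) std::cout << v << ' ';
    std::cout << '\n';
}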

The Finite Element Method (FEM) has been successfully applied to integral equations (The Galerkin method for the numerical solution of Fredholm integral equations of the second kind, Y. Ikebe, SIAM Review, Vol. 14, No. 3, July 1972), for example in heat radiation applications in engineering. This approach has also been applied to PIDE models in finance:

<QUOTE> from 2005

I just finished a project on this topic and it is part of my dissertation. I also use an operator splitting scheme, but I use FD on the diffusion part and FE on the jump part. It has been tested on PIDEs with up to three space dimensions for European options.


Yes, for FEM I use Galerkin with hat functions. The reason I use Soviet splitting is that in general I cannot find a simple scheme to solve my 3-d PIDE. It is disastrous to use FEM only (you know it from your own experience). My guess is that the diffusion part in high dimensions is a convection-dominated convection-diffusion equation; it is extremely difficult to use FEM (in my own experience, I do not mean anybody else cannot do it quite easily). And I dislike FDM for handling my jump part. Then came my idea and all of a sudden, bang goes the dynamite! The equation has become so simple to solve. Good question on the boundary condition. You need to adjust the boundary condition for each split step. At the moment I use estimated ones and the results are not so bad. More work needs to be done on this point, though. My thesis should be available soon. X.

Hi, I've done a project recently on this topic. One- and two-factor European and American options can be done by numerically solving the PIDE by FEM. I do not know why you use Soviet splitting to implement the FEM approach? Such Soviet splittings are usually used in finite difference schemes. If you solve it by FEM, you still have a variational formulation consisting of one term for the Gaussian part and another for the jump part (it will be a double integral), but they are coupled together. And one more question: how do you use the Landau transformation to convert a linear PIVI to a nonlinear PIDE on a fixed domain? Could you please provide more about why you solve your Leg1 by Nystrom's method?

<UNQUOTE>
 