Differential Machine Learning: novel ML algorithms with unreasonable effectiveness for pricing and risk approximations in finance

antoinesavine

New Member
I just published with my colleague Brian Huge the result of 6m+ research at Danske Bank on pricing and risk approximation by AI. We found that the combination of ML with automatic differentiation (AAD) makes a rather spectacular difference.
The working paper is available on arXiv, pending review by Risk Magazine. Feedback from the community is much appreciated.

working paper: [2005.02347] Differential Machine Learning
GitHub: differential-machine-learning - Overview
colab notebook: Google Colaboratory
blog: Differential Machine Learning
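For readers wondering what "differential" means concretely here: the core idea is to train on pathwise derivatives (produced cheaply by AAD) alongside values. Below is a minimal sketch of that idea using a polynomial basis in plain numpy; note this is illustrative only — the paper itself trains neural networks ("twin nets"), and the function names below are mine, not from the authors' repo.

```python
import numpy as np

def poly_basis(x, degree):
    """Polynomial features x^0..x^degree and their x-derivatives."""
    X = np.vander(x, degree + 1, increasing=True)   # columns x^0 .. x^degree
    dX = np.zeros_like(X)
    for k in range(1, degree + 1):
        dX[:, k] = k * x ** (k - 1)                 # d/dx of x^k
    return X, dX

def fit_differential(x, y, dydx, degree, lam=1.0):
    """Closed-form minimiser of ||Xw - y||^2 + lam * ||dX w - dydx||^2."""
    X, dX = poly_basis(x, degree)
    A = X.T @ X + lam * (dX.T @ dX)
    b = X.T @ y + lam * (dX.T @ dydx)
    return np.linalg.solve(A, b)

# Noisy values of sin(2x), but clean derivative labels (as AAD would
# provide) regularise the fit exactly where plain regression overfits.
rng = np.random.default_rng(42)
x = np.linspace(-1.0, 1.0, 40)
y = np.sin(2 * x) + rng.normal(scale=0.1, size=x.size)
dydx = 2 * np.cos(2 * x)
w = fit_differential(x, y, dydx, degree=7)
```

The design choice worth noticing: the derivative labels enter the loss exactly like extra observations, so the closed-form normal equations barely change, yet the fit is constrained in both level and slope.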


Antoine Savine
 

Daniel Duffy

C++ author, trainer
Antoine.
I reckon a lot of people here are interested in ML. The question is how they might get started.
 

Michsund

Active Member
C++ Student
Antoine.
I reckon a lot of people here are interested in ML. The question is how they might get started.
For me, I would say it's important to start with a good grasp of linear algebra, then learn basic regression/OLS, then study the bias/variance trade-off and understand basic models like lasso/ridge, and move on from there. With ML, I think it's important to understand the bias/variance trade-off and what we're actually trying to do.
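The lasso/ridge step mentioned above can be made concrete in a few lines. Here is a minimal numpy sketch of ridge regression (my own toy data, not from any text): the penalty `alpha` shrinks the coefficients, trading a little bias for less variance.

```python
import numpy as np

def ridge_fit(X, y, alpha):
    """Closed-form ridge: w = (X^T X + alpha I)^{-1} X^T y.
    alpha = 0 recovers ordinary least squares."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_w = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
y = X @ true_w + rng.normal(scale=0.5, size=100)

w_ols = ridge_fit(X, y, alpha=0.0)     # unbiased, higher variance
w_ridge = ridge_fit(X, y, alpha=50.0)  # shrunk: more bias, less variance
# For any alpha > 0 the ridge solution has a strictly smaller norm
# than the OLS solution (shrinkage), which is the bias/variance lever.
```

Lasso works the same way conceptually but with an L1 penalty, which has no closed form and additionally drives some coefficients exactly to zero.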
 

antoinesavine

New Member
Hi Daniel, I recommend Andrew Ng's lessons on Coursera for a start, he is an incredibly good teacher (makes me jealous :) Start with the Machine Learning course, and carry on with the Deep Learning specialization.
For the more theoretically inclined, Andrew Ng gives the same courses at Stanford, with more mathematical depth. His lecture notes are easily accessible on Stanford's web site.
For practical implementation, I like Aurelien Geron's book, ideally after following the Coursera material.
All of this should give students enough of a head start to read all kinds of papers and blogs and to follow presentations and discussions on applications in finance.
 

antoinesavine

New Member
The deepest bias-variance analysis I have come across is in Christopher Bishop's book, available online for free. But this is heavy material, not for the faint of heart, and it all predates deep learning, although it remains eminently current.
 

quantsmodelsbottles

Active Member
Hi Daniel, I recommend Andrew Ng's lessons on Coursera for a start, he is an incredibly good teacher (makes me jealous :) Start with the Machine Learning course, and carry on with the Deep Learning specialization.
For the more theoretically inclined, Andrew Ng gives the same courses at Stanford, with more mathematical depth.
I do not recommend the Coursera course. Read Introduction to Statistical Learning (it doesn't cover linear algebra, unfortunately, but it'll help you get started) or take Andrew Ng's Stanford course.
 

Michsund

Active Member
C++ Student
I do not recommend the Coursera course. Read Introduction to Statistical Learning (it doesn't cover linear algebra, unfortunately, but it'll help you get started) or take Andrew Ng's Stanford course.
Agree, ISL is a great book. Do Elements of Statistical Learning after.
 

Daniel Duffy

C++ author, trainer
I find Geron's book just OK. And boring.
It reads like a cookbook for pressing the right buttons in TensorFlow and scikit-learn.
As a follow-on, most of the O'Reilly books IMO are 90% tables, screenshots and plots and 10% maths/insights.
Each to his own; what do I know.

The constructive message is: opportunity for continuous kaizen.
 
Last edited:

Daniel Duffy

C++ author, trainer
Yes, the grad version is very linear algebra heavy, so one may want to take a refresher course beforehand.
Linear algebra is needed in so far that it is (only) a supporting mechanism for approximation and optimisation algorithms.
Most problems here are nonlinear and are solved as iterations of linear problems (enter linear algebra!).

What is missing IMHO in general is more precision on

. function approximation (a huge area)... many MLers think the higher the degree of the polynomial, the better the accuracy, and worse anecdotes besides :alien: Many in ML have a CS background (nothing wrong with that) but they miss out on the mathematical nuance. E.g. Geron's 300-degree polynomial (page 123) is out of this world. It's first-year numerical analysis.


. Multivariate calculus and optimisation ((un)constrained)
. some n-dimensional geometry and Functional Analysis
. able to program simple models

I think in this case that getting the fundamentals right is important, especially all that approximation stuff.

Others will have views as well. No one has a monopoly on wisdom. :love:
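The point above about nonlinear problems being solved as iterations of linear problems can be sketched with Newton's method, where every step is a linear solve J(x)Δx = −F(x). A minimal numpy example (my own toy system, not taken from any of the texts mentioned):

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, max_iter=50):
    """Newton's method for F(x) = 0: each nonlinear iteration
    solves one LINEAR system J(x) dx = -F(x) -- enter linear algebra."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))  # the linear-algebra workhorse
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy system: intersect the circle x^2 + y^2 = 4 with the parabola y = x^2.
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[1] - v[0] ** 2])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]],
                        [-2.0 * v[0], 1.0]])
root = newton(F, J, [1.0, 1.0])
```

Gauss-Newton, Levenberg-Marquardt and the optimisers behind ML training all follow the same pattern: linearise, solve, repeat.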
 
Last edited:

Dean

New Member
Hi, I have been trawling through machine learning courses trying to isolate the best approach to learning the subject for applications to finance. Various FE degree programs follow different approaches. The text "Elements of Statistical Learning", together with "Introduction to Statistical Learning", seems popular for FE. The former is used for a graduate statistics course at Stanford (Stat 315A). The authors assume a background in measure-theoretic probability and theoretical statistics, although they do not assume a computer science background.

On the other hand, CMU has a PhD program in Machine Learning and uses Bishop's, Murphy's and Mitchell's texts (CS 10-715), with selected readings in deep learning and recent articles and notes. Stanford CS does the same (CS 229). Andrew Ng references Bishop, not ESL.
There is an indication that ESL is frequentist and Bishop is Bayesian, and that economics tends toward a Bayesian approach to modeling?

The CS approach assumes data structures and algorithms courses along with upper-division probability and statistics (non-measure-theoretic). I also assume C++, Python and a functional language. Once the intro ML course is completed, CMU and Stanford move on to the theoretical aspects of machine learning (e.g. statistical learning, CS 10-716). Based on this, ESL seems popular but a bit off the standard training practices in CS departments. Can someone clarify which approach is best for applications to finance?
 

Daniel Duffy

C++ author, trainer
The authors assume background in measure theoretic probability

This excludes > 90% of readers! To have a background in measure-theoretic probability, the necessary foundations are

real analysis
measure theory / Lebesgue integration
Some Functional Analysis, really

The beginner should not be discouraged if he finds he does not have the prerequisites for reading the prerequisites.

Paul Halmos
 

Dean

New Member
It is my understanding that statistical machine learning uses functional analysis and probability theory.

A computer science approach to machine learning, with solid undergrad courses in algorithms, probability and stats, is an efficient way into the field at the masters level with an applied focus, unless one is seeking a purely "black box" approach following a Python, TensorFlow and/or R tutorial.
eg. Introduction to Algorithms Cormen… and Statistical Inference George Casella, Roger L. Berger
see CS 170

CMU is a leader in machine learning and seems to follow a balanced Bayesian and classical approach to statistics:
All of Statistics: A Concise Course in Statistical Inference by Larry Wasserman
Probability and Statistics 4th Edition by Morris H. DeGroot, Mark J. Schervish

Stanford's course requirements for 315A in statistical learning, as taught by the authors, require graduate probability, statistics and regression theory, per their course catalog. On the other hand, Berkeley's intro to machine learning, taught in their computer science department, uses the ISL and ESL texts but supplements them with a lot of course notes and articles; they also seem to present this material after a semester in AI.
 
Nice summary — a fact-based survey of the current stat learning/ML path.

I cannot agree that "There is an indication the ESL is frequentist and Bishop is Bayesian and economics tends toward following a Bayesian approach to modeling". ESL is both frequentist and Bayesian; furthermore, everything is Bayesian in today's statistics world.

IMHO, ESL takes a math/stat approach, while Bishop and Murphy take a computer science approach (maybe Bishop is a mix). I've encountered quite a lot of approximations when using ML methods; many of them are ad hoc and lack math/stat theoretical backing. However, they somehow produce acceptable estimates in certain situations. I think they are popular because they try to tackle complicated models which traditional math/stat methods cannot handle, at the cost of accuracy and reliability.

I'm still trying to figure out how to balance those two approaches: be open to ML methods, keep an eye on new developments in ML, but always beware of their shortcomings. Personally, I feel comfortable only using ML methods as first-pass screening tools.

 
Last edited:
There is another way to go, which is to fine-tune/adapt general ML methods to your own models. Methods could be regression, least squares or other common numerical/stat methods. The key is to adapt them, followed by good analysis.

btw: I've seen a couple of discussions here as to what the purpose of learning R is. To get a statistical idea across, R is very useful, as shown in ESL.
 
Last edited:

Zaurald

New Member
I just published with my colleague Brian Huge the result of 6m+ research at Danske Bank on pricing and risk approximation by AI. We found that the combination of ML with automatic differentiation (AAD) makes a rather spectacular difference.
The working paper is available on arXiv, pending review by Risk Magazine. Feedback from the community is much appreciated.

working paper: [2005.02347] Differential Machine Learning
GitHub: differential-machine-learning - Overview
colab notebook: Google Colaboratory
blog: Differential Machine Learning


Antoine Savine
This is interesting. You seem to have exploited the encoder-decoder architecture generally used in machine translation applications.

Quick question though: what if you remove the extra linear unit in the middle and stack the next hidden layers alongside (using the same differential activation function)? I'm having difficulty understanding why one should use a linear activation function here.

Forgive me, I'm just trying to understand the thought process behind choosing this architecture.
 