
But Where Are The Bayesians?

Question

Given the usefulness of Bayesian Inference in dealing with real-world uncertainty, why don't we see more of it in the MFE literature/coursework?

For example, the books on FEpress don't really mention it. I also don't see anything on the Baruch MFE curriculum website.

Background

To me, financial engineering theory involves laying down a core set of mathematical assumptions about something and then carrying the logic forward to a closed-form solution or a good numerical approximation. Practitioners breathe life into the equations by making markets based on theory blended with practice.

Practitioners sometimes run into trouble when the "real world" gets in the way of theory: the 10,000-year flood happens every decade or so. Models with a limited historical look-back get overly excited by "good" data, and practitioners lever imperfect assumptions up into disastrous consequences. The fall of LTCM is a great example of this.

Even before I learned about situations where assumptions hit a brick wall in markets (Oct 1987, Aug 2007, etc.), the frequentist view of the world I was presented with in undergraduate studies never quite worked for me. I eventually stumbled upon Bayesian Inference by way of Statistical Rethinking, and it totally changed my approach to modeling.


My Definition of Bayesian Inference

A meaningful subset of the following ideas:
1) Hypotheses are not models
2) Strict hypothesis falsification is not possible in most situations
3) Uncertainty should be aggregated and reported rather than ignored/averaged out
4) Your past beliefs and biases should be explicitly quantified in your priors
5) Models should be updated with new data using Bayes' Theorem
6) Asymptotic analysis doesn't work well in the real world
7) Nothing is "random", distributions capture our uncertainty under some assumptions
8) Parameters/models have distributions, data (usually) do not
9) Multilevel models should be used more often (a.k.a. mixed-effects models)
10) Regularization is extremely important
11) Every model should deliver a Posterior Predictive Distribution (see the sketch after this list)

(Given a small sample of data, where tuning via cross-validation is difficult:)
12) A pair of competing models should be compared using the Bayes factor
13) Multiple models should be compared using an information criterion
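
To make items 4, 5, and 11 concrete, here is a minimal sketch of a Bayesian update in Python. Everything in it (the Beta(2, 2) prior, the win/loss counts) is made up for illustration; because the Beta prior is conjugate to the Binomial likelihood, the update is exact.

```python
import numpy as np

# Hypothetical example: estimating the probability p that a trade is a winner.
# Prior belief (item 4): Beta(2, 2) weakly centers p on 0.5.
alpha, beta = 2.0, 2.0

# New data (item 5): suppose we observe 14 winners in 20 trades.
wins, trials = 14, 20

# Bayes' Theorem with a conjugate prior reduces to counting:
# the posterior is Beta(alpha + wins, beta + losses).
alpha_post = alpha + wins
beta_post = beta + (trials - wins)

# Item 11: the posterior predictive for the next trade integrates over
# parameter uncertainty instead of plugging in a point estimate.
p_next = alpha_post / (alpha_post + beta_post)
print(f"P(next trade is a winner | data): {p_next:.3f}")

# Item 3: report the uncertainty rather than averaging it away.
samples = np.random.beta(alpha_post, beta_post, size=100_000)
lo, hi = np.percentile(samples, [2.5, 97.5])
print(f"95% credible interval for p: ({lo:.3f}, {hi:.3f})")
```

The same mechanics carry over to models without closed-form posteriors, where MCMC takes over the heavy lifting.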
 
This makes me think of Nate Silver's book, The Signal and the Noise, which is firmly in the pro-Bayesian camp and also a great read.

What a coincidence, I'm actually reading it right now.

I'm up to the part where he writes about folding difficult-to-quantify fundamental data into a quantitative baseball system. In practice one could implement that with strongly regularizing priors, but he hasn't mentioned that idea explicitly yet.
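
He doesn't frame it this way in the book, but "strongly regularizing priors" are easy to make concrete: a Normal(0, tau^2) prior on regression coefficients makes the MAP estimate coincide with ridge regression. A minimal sketch, with made-up data and parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 30 observations of 5 noisy "fundamental" features.
X = rng.normal(size=(30, 5))
y = X @ np.array([1.0, 0.5, 0.0, 0.0, 0.0]) + rng.normal(scale=2.0, size=30)

# A Normal(0, tau^2) prior on each coefficient; a small tau is a strong prior.
# The MAP estimate under this prior is ridge regression with lam = sigma^2 / tau^2.
sigma, tau = 2.0, 0.5
lam = (sigma / tau) ** 2

# Closed-form MAP/ridge solution: (X'X + lam*I)^{-1} X'y
beta_map = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

print("OLS:               ", np.round(beta_ols, 2))
print("MAP (strong prior):", np.round(beta_map, 2))  # shrunk toward zero
```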
 
Some references: Bayesian Methods in Finance (Rachev et al.) and Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference (Gamerman and Lopes).
 
Bayesian methods in finance are garbage; they are even worse than frequentist methods. At least frequentist methods are simple to use (namely, calibration) and easy to understand.

When you price derivatives, i.e. when you work under the martingale measure, there is a clear, objective set of steps for finding the drift and the diffusion coefficient; it is not an 'estimation' problem, so no statistical tools are required. When you risk manage, i.e. when you work under the physical measure, you must estimate at least the drift coefficient, but you do it with simple tools (borrowed from the frequentist school), as you need to run many simulations.

Think of quantitative finance as a field that should use as little statistics as possible, and when it does use statistics, they should be kept as simple as possible.
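
To spell out the martingale-measure point: under the risk-neutral measure, no-arbitrage pins the drift of the underlying to the risk-free rate, so nothing about the drift is estimated from historical returns. A minimal Black-Scholes Monte Carlo sketch (all parameters are made up; in practice sigma would be calibrated to quoted option prices):

```python
import numpy as np

# Hypothetical Black-Scholes inputs.
S0, K, r, sigma, T = 100.0, 105.0, 0.03, 0.2, 1.0

# Under the martingale (risk-neutral) measure the drift is r, full stop:
# no statistical estimation of the real-world drift is involved.
rng = np.random.default_rng(42)
z = rng.standard_normal(1_000_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

# Discounted expected payoff of a European call under Q.
call = np.exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))
print(f"Monte Carlo call price under Q: {call:.3f}")
```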

Another answer: not a single bank uses Bayesian methods for pricing derivatives or risk management.

Another answer: not a single company (bank, hedge fund, etc.) believes an asset's value can be forecast by modelling it as a stochastic differential equation whose drift/diffusion coefficients are to be estimated.

You ask why it's not seen; the answer is that it's not used.

And if I remember my academia days... Bayesian statistics is a gimmick. It has many problems of its own. Why make things more complicated? Keep it simple.
 
This makes me think of Nate Silver's book, The Signal and the Noise, which is firmly in the pro-Bayesian camp and also a great read.
Worth noting that Nate Silver is a fraud; his work is complete garbage. People who know nothing about statistics see him as a god. Those who know a bit will ask you for his track record: look into how disastrously wrong he got the Trump election. I knew people using simple methods, just basic statistical techniques on publicly available polls, who had more accurate models than the kind Silver used.
 
And another post, because this thread is full of people putting forward stupid ideas: the fall of LTCM had nothing to do with the quality of their models. It is true that their VaR model failed in a rather spectacular fashion, but that was not the reason they went down. It was all about liquidity and reputation: firms would not trade with them under any circumstances, and firms started betting in the opposite direction once LTCM's positions were revealed. How is a better VaR model going to save you there? It's not. You could argue that if they had paid attention to their VaR model earlier it would have saved them... traders and portfolio managers will laugh at you if you actually argue that, so I will not waste my time even discussing it.

By the way, the 10,000-year thing makes no sense if you are sensible about risk management. Use a Lévy process (something with heavy tails) to estimate how much capital you should hold, and you will experience a 'tail' event every year or so. The 10,000-year event would turn into something like a 6-month event, which is probably more realistic. Because you have been taught to believe in the Gaussian fraud distribution (ironic, given that Gauss developed it as a distribution for the errors in measuring a phenomenon, not for the phenomenon itself!), the 10,000-year event is your reference point. Only for the Gaussian. Not for heavy-tailed distributions.
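
The tail arithmetic is easy to check. Here is a minimal sketch comparing how often a 5-standard-deviation daily drop occurs under a Gaussian versus a heavy-tailed Student-t with 3 degrees of freedom; the threshold and the degrees of freedom are illustrative choices, not anything from this thread, and it assumes scipy is available:

```python
import numpy as np
from scipy import stats

# How often is a daily return more than 5 standard deviations below the mean?
threshold = -5.0

# Gaussian tail probability.
p_gauss = stats.norm.cdf(threshold)

# Student-t with 3 degrees of freedom, rescaled to unit standard deviation
# (a t with df degrees of freedom has variance df / (df - 2)).
df = 3
t_scale = np.sqrt((df - 2) / df)
p_t = stats.t.cdf(threshold / t_scale, df)

for name, p in [("Gaussian", p_gauss), ("Student-t(3)", p_t)]:
    years = 1.0 / (p * 252)  # expected waiting time, at 252 trading days/year
    print(f"{name}: P(drop beyond 5 sd) = {p:.2e}, about one every {years:,.0f} years")
```

Under the Gaussian the event is a once-in-many-millennia curiosity; under the heavy-tailed distribution it shows up every couple of years.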

Edit: I have never, never, never ever seen a practitioner "run into trouble when real-world data gets in the way of theory". That is the precise opposite of what I have experienced on both the buy side and the sell side. Often practitioners know nothing about the theory; they don't even know about the Springer Finance textbooks, or arXiv, or SSRN. All they know is the data, some simple statistical techniques, maybe a bit of linear algebra (PCA)... that's about it. Practitioners know the data; academics know the theory. Didn't you ask people who work in finance for their views on this? It's utter garbage.
 