The Future of Quant Jobs

*Shrug* -- probably from AlphaGo and AlphaZero.

I am not familiar with Go (not my cup of tea to be honest).

Gary Marcus, a psychologist at New York University, has cautioned that for all we know, AlphaGo may contain "implicit knowledge that the programmers have about how to construct machines to play problems like Go" and will need to be tested in other domains before being sure that its base architecture is effective at much more than playing Go. In contrast, DeepMind is "confident that this approach is generalisable to a large number of domains".[9]

This is the part I would be careful about. Programmers tend to start with the solution in mind?
 
They succeeded with chess. But to be honest, both games are clearly structured with clear and unambiguous rules. I doubt they will have much, if any, success with unstructured situations. The hype and overblown expectations for "AI" have always been there, over the decades. My own two cents is that quant work that can be structured will be automated -- though this is no profound insight.

"AI" so far always boils down to brute force computation. The chess engines are calculating tens and hundreds of thousands of positions a second. This, along with a relatively primitive evaluation function, has allowed them to beat top human players, who calculate relatively little but have a more honed and subtle positional evaluation and instinct. Apparently the thing about AlphaZero is that the number of variations it calculates is two orders of magnitude less than the leading chess engines (Stockfish, Komodo, Houdini) but still plays at least as strongly as they do. Though it still calculates orders of magnitude more than a human player. More than this I can't say since the developing team isn't revealing any details nor is the program+hardware being released for general consumption (where it might be reverse engineered).
 
just remember that whenever somebody talks about machine learning or AI or whatever these new buzz terms are, they are talking about statistics. that is all it is. applying statistics and mathematics to solve a problem used to be called applied statistics or applied mathematics, but for some reason it is now called machine learning. it is silly. it is like in physics when somebody mentions 'quantum mechanics' - rarely are they talking about a specific feature of quantum mechanics as opposed to classical mechanics or just physics in general.

I have to disagree with you. First, quants do use Black-Scholes. I am not talking about the formula itself, but most of these pricing models still operate within the "Black-Scholes framework". Most of the time they use a "calibration" technique to reverse-engineer the vol and fit the market price. Essentially they are overfitting the pricing models by adding more parameters. Try testing them out of sample and see what happens. Sure, you can argue that calibration cannot be used for prediction as it only reflects the current market view, and the market changes. To me this is just "blah blah blah".
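
To illustrate what that calibration step amounts to in the simplest case, here is a minimal sketch for a single European call, with made-up market inputs (none of these numbers come from this thread):

    import math
    from scipy.optimize import brentq
    from scipy.stats import norm

    def bs_call(S, K, T, r, sigma):
        # Black-Scholes price of a European call
        d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
        d2 = d1 - sigma * math.sqrt(T)
        return S * norm.cdf(d1) - K * math.exp(-r * T) * norm.cdf(d2)

    def implied_vol(price, S, K, T, r):
        # "calibration": solve for the sigma that reproduces the quoted price
        return brentq(lambda sig: bs_call(S, K, T, r, sig) - price, 1e-6, 5.0)

    # hypothetical quote: spot 100, strike 105, half a year, 2% rate, price 4.20
    print(implied_vol(4.20, 100.0, 105.0, 0.5, 0.02))

With one option this fit is exact by construction; the overfitting complaint above is about doing the same thing with many parameters across many instruments and expecting the fit to hold out of sample.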

As for machine learning, it is not just statistics but a robust ecosystem for applying existing AI toolkits to optimize a practical problem. Deep learning is essentially layers of logistic sigmoids (for example) that can learn a non-linear function, which is essentially what those pricing functions are, whether under Q or P. Well, one argument is that a computer is just binary mathematics. But it is very powerful with algorithms. I would say AI today is algorithms wrapping statistics.
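
As a toy version of that claim (the network size, grid, and parameters here are all arbitrary choices for illustration), fit a small network of logistic sigmoids to a Black-Scholes-style pricing curve and check the in-sample error:

    import numpy as np
    from scipy.stats import norm
    from sklearn.neural_network import MLPRegressor

    # target: a Black-Scholes-style call price as a function of spot, all else fixed
    K, T, r, sigma = 100.0, 1.0, 0.02, 0.3
    def call_price(S):
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    S = np.linspace(50.0, 150.0, 500).reshape(-1, 1)
    y = call_price(S).ravel()
    X = (S - 100.0) / 50.0  # rescale inputs so the sigmoids are not saturated

    # two layers of logistic sigmoids trained to reproduce the non-linear pricing map
    net = MLPRegressor(hidden_layer_sizes=(32, 32), activation="logistic",
                       max_iter=5000, random_state=0)
    net.fit(X, y)
    print(np.max(np.abs(net.predict(X) - y)))  # in-sample fit error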

I agree that nowadays AI is an over-abused term and 90% (probably 99%) of existing AI projects are dogshit. But there are people who are applying it very well, like Google Brain, Amazon, etc. From my perspective it is inevitable that AI will one day take over. It is not happening yet because banks are too cheap (and stupid) to fund these projects.
 
the relevant part is your point about calibration. my view and most practitioners' view is that calibration cannot be used for prediction and that all of these pricing models are used merely to reflect current market views and to generate robust hedges - usually first-order greeks and perhaps a couple of second-order hedges (gamma). i am happy to provide references for this - not from academia, but from people who actually work in banks.

believing that a stochastic differential equation for an asset can actually 'predict' option prices or moves for that asset is... ridiculous. people stopped taking this approach seriously a long time ago. it is not what quants do. if the pricing model function is your base, then this is how you will think. if the market is your base, then the pricing model solely exists to match the market and to generate hedges. i am in the latter category.

this is a fundamental difference between probability and statistics. crudely speaking, your perspective is that of a statistician. but this is not a problem where regression, cross-validation, over-fitting, etc., are helpful and/or relevant. there is nothing to predict. nearly everything is encoded in the market already and we just need to make sure E[f(S)] behaves suitably, so we use probability theory... the girsanov theorem, monte carlo methods, optimal stopping times, etc.
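
to make the E[f(S)] remark concrete, here is a minimal monte carlo sketch under the usual risk-neutral GBM assumption - the payoff and all numbers are hypothetical:

    import numpy as np

    # under the risk neutral measure Q the drift is r, whatever the real-world drift is
    S0, K, T, r, sigma = 100.0, 105.0, 0.5, 0.02, 0.25
    rng = np.random.default_rng(0)

    Z = rng.standard_normal(1_000_000)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

    # price = discounted expectation E[f(S_T)] under Q, here f = call payoff
    payoff = np.maximum(ST - K, 0.0)
    print(np.exp(-r * T) * payoff.mean())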
 
1) Is it likely that the job of a quant could be completely (or partially) performed by an artificial intelligence system in the near future (the next 5-30 years)?
2) Would the event described in 1) lead to banks and other financial institutions replacing 'human quants' with 'machine quants'?
3) What skills could an aspiring quant learn now in order to guard against the event described in 1)?
4) Given the possible events above, is it risky to position oneself (in terms of extra reading, university courses, etc.) to become a quantitative analyst?
5) What additional tasks might a quant of the future (e.g. in 20 years' time) have to be able to perform in order to be superior to a machine quant?
6) What additional abilities might a quant of the future have to possess in order to be superior to a machine quant?

Many Thanks,
-Porimasu

1, 2) It is already taking place, though the cause is more prosaic than AI: after the crisis the market shrank and products got simpler => less need for quants.
On the other hand, regulation grew, but "more regulation" de facto means not a deeper, more insightful analysis of risks but more and more reporting:
Finding the Human Factor in Bank Risk
And reporting can be (and is being) automated.

3) Practical market experience, knowledge of regulatory frameworks, and the ability to maintain "AI" systems :)

4) Of course it is, but the problem is more general: the future gets more and more vague, and one needs to be more and more flexible in order to survive.
In principle, there are no safe jobs anymore anyway.
As long as you are at university, you need to work hard. On the one hand, you have to learn practical stuff in order to get a job (even as a junior; see e.g. my personal story at slide 7 of http://quantlib.org/slides/qlum17/nekrasov.pdf ).
On the other hand, you need to learn the fundamentals: you will have no further opportunity to learn them outside of university, but they are [sometimes] helpful when learning new applied stuff.
So I am quite happy that I dwelt on functional analysis and measure theory during my university time.

5) 20 years' time? Ask Kurzweil, really :)
Or, as a quant, try to forecast a Wiener process 20 years out, given an unclear estimate of \mu and a pretty large \sigma :) (see the sketch at the end of this post)

6) Flexibility, willingness to learn and change, as well as humor and Lebensfreude (joy of living)
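
Here is the sketch mentioned in 5), with made-up numbers: even a moderate \sigma and a drift that is only known to within a few percent blow a 20-year forecast cone wide open.

    import numpy as np

    # hypothetical setup: 20-year horizon, moderate vol, drift known only roughly
    T, sigma = 20.0, 0.20
    rng = np.random.default_rng(0)

    for mu in (0.00, 0.03, 0.06):  # equally plausible drift estimates
        W = np.sqrt(T) * rng.standard_normal(100_000)
        ST = 100.0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W)
        lo, hi = np.percentile(ST, [5, 95])
        print(f"mu={mu:.2f}: 90% of end values lie between {lo:.0f} and {hi:.0f}")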
 
In a market where these assets are traded with lots of liquidity and your client comes to you to take a position, do you give them the market price or your risk-neutral price? Yes, you can argue that pricing models are still useful for revaluation for hedging purposes. As a matter of fact, I have been seeing how traders bleed out on a daily basis as the model prices "converge" to market prices (of course, near maturity it is just linear). Most of the money they make is on the first day, when they collect the premiums and hope they can cover their hedging cost, but lmao, good luck with that.

As for your claim about risk-neutral pricing - how quants shifted from the physical measure to the risk-neutral one via the Girsanov theorem/replicating portfolios/martingale pricing, etc. - the theory is indeed beautiful. But reality seems to disagree. All the underlying assumptions are violated. Even equity itself doesn't seem to be lognormally distributed.

You are restricting yourself to a framework in which you can always perfectly explain yourself. Have you ever considered that the framework itself is a false system?
 
im not sure we are on the same page. you talk about the market price and the risk neutral price, but to me they are the same thing, and by that i mean they are both the market price. if an option has a quoted vol of 50% and a price of $10,000, then the risk neutral price will be $10,000. you say the models are "still" useful for hedging purposes... i would say their only use is for hedging purposes, nothing else, and certainly not as a prediction tool.
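
a minimal sketch of that equivalence, with hypothetical quotes: the quoted vol and the quoted price are the same information, just pushed through the Black-Scholes map in opposite directions.

    import math
    from scipy.optimize import brentq
    from scipy.stats import norm

    def bs_call(S, K, T, r, sigma):
        d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
        return S * norm.cdf(d1) - K * math.exp(-r * T) * norm.cdf(d1 - sigma * math.sqrt(T))

    S, K, T, r = 100.0, 100.0, 1.0, 0.0
    quoted_vol = 0.50                          # the market quotes a vol...
    price = bs_call(S, K, T, r, quoted_vol)    # ...which is the same as quoting this price
    back = brentq(lambda v: bs_call(S, K, T, r, v) - price, 1e-6, 5.0)
    print(price, back)                         # back == 0.50: two units, one number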

we have to be careful about what is a tool and what is an assumption... the girsanov theorem is a tool, it cannot be violated. now, martingale pricing is an assumption, but we accept that assets are martingales because if the asset is not a martingale, you cannot produce robust hedges. generally speaking, if your asset is not a semi-martingale, then risk neutral pricing theory collapses and you may as well put it in the garbage bin - i am happy to provide references showing some bizarre, counter-intuitive stuff in option pricing for non-semi-martingale assets. that is why we assume the underlying asset is a martingale under the risk neutral measure. the assumption of log-normality is material, but that is why we have the SABR model, the Heston model, local stochastic volatility models... i do agree that most of the underlying assumptions are violated, but none of them are violated enough to render the usual tools of risk neutral pricing theory redundant - in my view, anyway.
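
for concreteness, this is the change-of-measure step being referred to, written out for the textbook lognormal case (standard material, not something specific to this thread):

    \text{under } P:\quad dS_t = \mu S_t\,dt + \sigma S_t\,dW_t^P
    \text{Girsanov (the tool):}\quad dW_t^Q = dW_t^P + \tfrac{\mu - r}{\sigma}\,dt
    \Rightarrow\quad dS_t = r S_t\,dt + \sigma S_t\,dW_t^Q
    \Rightarrow\quad d\big(e^{-rt} S_t\big) = \sigma\,e^{-rt} S_t\,dW_t^Q \quad (\text{a } Q\text{-martingale})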

but as i explained before, we do not care about the underlying properties of market data, i.e., the lognormal assumption for an equity. we care about the underlying properties of the hedges, usually of vanilla derivatives, relative to exotic derivatives. for example, you can use a normal distribution for an equity (the Bachelier model) - a blind idiot would say "but the equity cannot be negative???? this model is garbage" - to generate a Bachelier-model implied vol that will match the market price. i do not usually care about the market data; that is one of the reasons why risk neutral pricing theory works.
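
a minimal sketch of that point, assuming hypothetical quotes: take a market call price, however it was generated, and back out a Bachelier (normal-model) vol that reproduces it exactly - negative-equity "problem" notwithstanding.

    import math
    from scipy.optimize import brentq
    from scipy.stats import norm

    def bachelier_call(S, K, T, vol):
        # normal-model (Bachelier) call, zero rates for simplicity
        d = (S - K) / (vol * math.sqrt(T))
        return (S - K) * norm.cdf(d) + vol * math.sqrt(T) * norm.pdf(d)

    # hypothetical market quote for the option
    market_price = 4.20
    S, K, T = 100.0, 105.0, 0.5

    # the Bachelier implied vol that matches the quote exactly (a normal vol, in price units)
    vol_N = brentq(lambda v: bachelier_call(S, K, T, v) - market_price, 1e-8, 500.0)
    print(vol_N, bachelier_call(S, K, T, vol_N))  # second number equals market_price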

i am not trying to perfectly explain anything. the market moves however it wants - option prices, and generally all derivatives & market data, are a function of supply and demand, economic factors, etc... i do not attempt to explain it; all i do is match it in such a way that there is no arbitrage in the quotes i put to the market. in my view, reality is truth, and fictions/dreams are left to academics, not practitioners.
 