
"Fooled by Randomness" and Quant Finance

Jim

Simply put:

1. Extensive use of quantitative data --> Very conducive to random luck.
2. Markets move because of choices made by people --> Markets are not random processes --> Pure quant models = Ineffective approach to trading.

-Does this mean that the systematic processing of qualitative data will be the driving force behind the finance models of the future? Some of this is already being done, although at a very shallow level (e.g. sentiment analysis, financial information extraction). Thoughts?
 
Whatever stochastic calculus would lead you to believe, markets are clearly not random processes. I've also wondered about the role of human psychology, particularly in high frequency trading. My belief is that, since the trading models are engineered by people, they must have absorbed some human psychology along the way. I've talked to some leading high frequency traders and they've downplayed the role of this, although they are more technologists than traders. It would be interesting to hear more opinions from different perspectives.
 
"One of the most common reactions to our early research was surprise and disbelief. Indeed when we first presented our rejection of the Random Walk Hypothesis at an academic conference in 1986, our discussant - a distinguished economist and senior member of the profession - asserted with great confidence that we had made a programming error, for if our results were correct, this would imply tremendous profit opportunities in the stock market."
- Andrew Lo and A. Craig MacKinlay, "A Non-Random Walk Down Wall Street"

"A man and his economist friend walk down the street and see a $100 bill. The man reaches down to pick up the bill, but is interrupted by the economist: Don't bother... if it were a real $100 bill, somebody would have picked it up already."
- from the same book
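Lo and MacKinlay's rejection of the Random Walk Hypothesis rested on the variance-ratio statistic: under a random walk, the variance of q-period returns should be q times the variance of one-period returns, so the ratio should be close to 1. Here is a minimal sketch of that statistic in Python - note the published test also derives heteroskedasticity-robust standard errors, which are omitted here:

```python
import numpy as np

def variance_ratio(returns, q):
    """Lo-MacKinlay variance ratio: ~1 under a random walk.

    VR(q) = Var(q-period returns) / (q * Var(1-period returns)).
    VR > 1 suggests positive autocorrelation (trending);
    VR < 1 suggests mean reversion.
    """
    r = np.asarray(returns, dtype=float)
    r = r - r.mean()                           # remove estimated drift
    var_1 = r.var(ddof=1)                      # one-period return variance
    # overlapping q-period returns: sums of q consecutive one-period returns
    r_q = np.convolve(r, np.ones(q), mode="valid")
    var_q = r_q.var(ddof=1)
    return var_q / (q * var_1)

# iid Gaussian returns (a true random walk) should give VR close to 1
rng = np.random.default_rng(0)
iid = rng.normal(size=100_000)
print(round(variance_ratio(iid, 5), 2))        # close to 1.0
```

On real weekly equity-index returns, Lo and MacKinlay found ratios significantly above 1 - which is exactly the result the quoted discussant refused to believe.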
 

I am thoroughly convinced that there is at least some element of human psychology in those models. Leaving it out might mean poor risk management - but I admit that this is just my personal bias.

I've heard of the $100 bill parable. If we assume that the only way for people to know about the $100 bill is through qualitative data (e.g. news sources, annual reports), then pure quantitative models that rely on the notion that markets are random processes would be an utter failure. The models would not only cause traders to miss the opportunity to gain the $100, but also expose them to more risk - the $100 may affect some of the positions held by the institution. The interplay of Natural Language Processing and time series analysis, along with other text processing techniques, appears to be one of the few solutions to this problem.
 

First, you must remember that the markets are the people; the price represents EVERYTHING put together.
That being said, trying to include ALL the inputs for some stock is IMPOSSIBLE and VERY INEFFICIENT.

One of my professors once told me a story: NASA had a new program with thousands of input variables that was supposed to be the best thing since silicon implants :P Well, it was so complicated that they never managed to get it to work or produce results as good as much simpler models.
They brought in an expert who reduced the complexity tenfold, simplifying the model through approximations until it needed only a couple of dozen variables; only then did the program work properly and show what it was worth.

Getting back to finance - I don't know if you're familiar with communications theory, but often, even though you have more empirical data and could use it, it is simpler, and on average more profitable, to just model the noise as a random variable.
Same thing in finance: I'd guess that many models have some slowly moving baseline onto which they insert the faster "noise"; trying to model that noise in too much detail will only backfire in terms of computational effort and response time.
It's better to have 60% success on a HUGE volume than 80% on a much SMALLER one.
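The baseline-plus-noise point can be illustrated with a toy simulation: a forecaster that treats the fast fluctuations as zero-mean randomness (averaging to track the slow baseline) beats one that chases every tick. Every number below - the sine baseline, noise scale, and window length - is an arbitrary choice for illustration, not anything from a real trading model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
t = np.arange(n)
baseline = np.sin(2 * np.pi * t / 500)        # slowly moving "true" level
noise = rng.normal(scale=0.5, size=n)          # fast fluctuations, treated as random
price = baseline + noise

# Forecast the next value two ways:
# (a) naive: repeat the last noisy observation (tries to "follow" the noise)
# (b) smooth: average the last w observations (models the noise as zero-mean randomness)
w = 50
naive_err = price[w:] - price[w - 1:-1]
ma = np.convolve(price, np.ones(w) / w, mode="valid")   # trailing moving average
smooth_err = price[w:] - ma[:-1]

print(naive_err.var(), smooth_err.var())       # smoothing has the lower error variance
```

The naive forecaster's error carries the noise variance twice (today's shock plus yesterday's), while the smoother averages most of it away at the cost of slightly lagging the baseline - the same trade-off as modeling noise explicitly versus treating it as random.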
 
We went from a gigantic, building-sized, punch-card-based computer at MIT in the mid-60s to chips less than half the size of a paper clip today. Computing power virtually doubles every year, so in about fifteen years it would be roughly 30,000 times greater than it is today (2^15 ≈ 32,768). Since computers will be able to process things thousands of times quicker in the future, we will be able to create models many times more complex than today's.

If computers could understand natural language, they would be able to process all the qualitative data relevant to a particular stock - and react to any news that disrupts the status quo indicated by that data. How do we know what constitutes the status quo/balance, and how much effect would newly generated information have on the price of the stock (as well as its related stocks, for hedging)? Machine learning and statistics would be able to address that - at a much more advanced level than what is possible right now. Current qualitative models are very underdeveloped.
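For a sense of how shallow the current state of the art is, the lexicon-count approach behind much basic sentiment analysis fits in a few lines. The word lists and headlines below are made up purely for illustration:

```python
# Hypothetical word lists and headlines -- purely illustrative, not a real lexicon.
POSITIVE = {"beat", "upgrade", "growth", "record", "surge"}
NEGATIVE = {"miss", "downgrade", "lawsuit", "recall", "plunge"}

def sentiment_score(headline):
    """Net count of positive minus negative words, normalized by headline length."""
    words = headline.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)

headlines = [
    "Acme shares surge on record growth",        # positive words -> score > 0
    "Regulator opens lawsuit after product recall",  # negative words -> score < 0
]
for h in headlines:
    print(h, round(sentiment_score(h), 2))
```

A scorer like this ignores negation, context, and entity relevance entirely - which is why tying text signals to time series models in a meaningful way remains the hard part.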

*I don't see why it would be impossible to create a model that accounts for statistically significant text data in an efficient manner.

P.S. Thanks for your input. And I do understand the merits of simplicity - after all, someone's gonna end up using the thing. But I am actually looking forward to a future where financial models require little to no human input.
 
It's funny - I had a similar discussion with someone from NYU (a non-hard-sciences major). Her statement was that the quant models are wrong because they are built by people in the "hard sciences". My response was that you have to use some math to perform quantitative analysis. I asked her if she had any suggestions for alternative fields that could do the modeling, but could not get a reply.

So, if the random walk is not accurate (or at least not the best), can anyone come up with better models based on something else? My guess is yes, but how long will it take?
 

I'd like to offer a preemptive apology for simply stating the obvious in my original post: that quant models are inefficient and incomplete. I would never say quant models are wrong - we already know that; they're all demonstrably wrong. They're simply metaphors that attempt to capture the more significant attributes of a phenomenon. But I am inclined to believe that financial models would drastically improve once we incorporate a whole new avenue of data.

As to how long it would take to achieve such a level of sophistication... I'd say it'd require at least a decade. But seeing that many of the "quant hopefuls" in high school still have a good ten to fifteen years ahead of them before they enter the workforce, I thought this would be an interesting discussion.
 
Not all quant models are based purely on time series data.

The ones you see in books are usually applicable primarily to option valuation (the continuous-time SDEs).
 