
Algorithmic Trading Conference

Notes from the conference

I didn't know much about the people actually doing research in algo trading beforehand, but it appears that many of the big researchers in the field (who all know each other, not surprisingly) presented here. I feel smarter just for having listened to some of the things said. I am quite eager to take a look at some of the papers discussed.

Here are some of my notes from the presentations. I welcome anyone to add their own observations on these presentations, and I will incorporate them into these descriptions.

Passive Orders and Natural Adverse Selection
George Sofianos, Goldman Sachs

This presentation was based on sell-side research that was just being presented for the first time. This was one of my favorite presentations, and I would love to take a look at the paper itself if anyone has a relationship with Goldman. (I also have the most notes because it was the beginning of the conference.)

The goal of the paper was to better refine the metrics of algorithmic trade performance. Specifically, most metrics do not account for the opportunity cost of an incompletely filled order, and therefore underestimate trading costs.

He motivates the paper by defining natural adverse selection, the systematic cost that arises because buy orders get filled faster when the price is falling and slower when it is rising (and conversely for sell orders). These costs do not cancel out; empirically, they run about 24 bps.

Overcoming this problem of the price "running away" by using limit orders results in a lower price on the filled portion (you end up paying only about 3 bps -- the bid-ask spread?), but less of the order gets filled; the remainder pays a large penalty, either as the "clean-up cost" of a market order or as the opportunity cost of not owning those shares. This aggregate cost is the "all-in cost." Ultimately, a marketable limit order, once clean-up costs are included, costs about the same as a market order.
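The all-in-cost idea reduces to a fill-weighted average. Here is a minimal sketch; the fill fraction is my assumption, and the 3 bps / 24 bps figures are only the rough magnitudes quoted in the talk, not the paper's actual numbers:

```python
# Sketch of the "all-in cost": a passive limit order fills part of the
# parent order cheaply, while the unfilled remainder pays a clean-up
# (market-order) cost. All numbers below are illustrative.
filled_fraction = 0.70   # assumed fraction of the order filled passively
passive_cost_bps = 3.0   # cost on the filled portion (roughly the spread)
cleanup_cost_bps = 24.0  # market-order cost on the unfilled remainder

all_in_cost_bps = (filled_fraction * passive_cost_bps
                   + (1 - filled_fraction) * cleanup_cost_bps)
print(round(all_in_cost_bps, 2))  # fill-weighted average cost in bps
```

The point of the metric is that reporting only the 3 bps on the filled shares makes the passive strategy look far cheaper than it really is.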

Dynamic Optimization as a Foundation for Custom Execution Algorithms
Merrell Hora, Credit Suisse

A prime question in (sell-side) algorithmic trading is "how much of an order do you execute now?" He describes what appears to be a standard procedure in the algo trading world (many presenters used the same equation): model the decision as minimizing a weighted sum of the expected cost and the variance of that cost (you want to pay less, but you also never want to pay far too much).
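The mean-variance objective can be illustrated with a toy schedule comparison. The linear impact model, the parameter values, and the two candidate schedules below are my simplifying assumptions, not the presenter's model:

```python
import numpy as np

# Toy version of the execution objective used throughout the conference:
# choose the trade schedule v minimizing  E[cost] + lam * Var[cost].
# Linear temporary impact and i.i.d. price noise are assumptions made
# here purely for illustration.
X = 1_000.0   # shares to buy
N = 5         # trading periods
eta = 0.01    # temporary impact coefficient (illustrative)
sigma = 0.5   # per-period price volatility (illustrative)
lam = 1e-2    # risk aversion / urgency parameter

def objective(v):
    """v: shares traded per period (must sum to X)."""
    x = X - np.cumsum(v)                     # shares still unexecuted
    exp_cost = eta * np.sum(v ** 2)          # expected impact cost
    var_cost = sigma ** 2 * np.sum(x ** 2)   # risk of the residual position
    return exp_cost + lam * var_cost

uniform = np.full(N, X / N)                          # trade evenly (TWAP-like)
front = X * np.array([0.4, 0.25, 0.15, 0.12, 0.08])  # front-loaded schedule

print(objective(uniform), objective(front))
```

With any meaningful urgency, the front-loaded schedule beats the uniform one: trading faster costs more in impact but cuts the variance of the residual exposure, which is exactly the trade-off the weighted objective encodes.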

His algorithm uses Bayes' theorem to deduce the daily trading volume given the volume traded so far [ P( DV | V_t ) ] from historically estimated distributions of daily volume [ P(DV) ] and of volume traded by a given time, conditional on daily volume [ P( V_t | DV ) ]. [It also incorporates the price in a way I didn't understand -- he then uses dynamic programming to optimize the online trading problem.]
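The Bayes update itself is straightforward on a discrete grid. The grid, the prior, and the likelihood shape below are my assumptions for illustration; the paper estimates the actual distributions from historical data:

```python
import numpy as np

# Sketch of the Bayesian volume forecast:  P(DV | V_t) ∝ P(V_t | DV) P(DV).
dv_grid = np.array([5e5, 1e6, 2e6, 4e6])  # candidate daily volumes (shares)
prior = np.array([0.2, 0.4, 0.3, 0.1])    # P(DV), historically estimated

v_t = 6e5    # volume observed so far today
frac = 0.3   # expected fraction of daily volume done by this time of day

# Assumed likelihood: V_t ~ Normal(frac * DV, (0.1 * DV)^2)
mean = frac * dv_grid
sd = 0.1 * dv_grid
likelihood = np.exp(-0.5 * ((v_t - mean) / sd) ** 2) / sd

posterior = likelihood * prior
posterior /= posterior.sum()              # normalize to get P(DV | V_t)

forecast = float(np.dot(posterior, dv_grid))  # E[DV | V_t]
print(posterior.round(3), round(forecast))
```

Seeing 600k shares by a time when 30% of the day's volume is typically done pulls the forecast toward a ~2M-share day, even though the prior favored 1M; the scheduler can then re-plan the rest of the order against the updated volume estimate.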

As we see with later algorithms, the resulting solution likes to trade at the beginning of the day and at the end of the day: at the beginning, trades can be done at a price very close to the "arrival price" (the market price when the order is received), and at the end, it trades because it is running out of time.

Cul de Sacs and Highways: An Optical Tour of Dark Pool Trading Performance
Ian Domowitz, ITG

In this paper, Domowitz examines the quality of execution in different dark pools (originally called "crossing networks").

The first dark pool was POSIT from ITG, started in 1984. Recently, these have exploded in number as a result of "liquidity aggregators": algorithms that place orders in many dark pools and dynamically split an order based on where it is getting filled. These aggregators solve the chicken-and-egg problem that a dark pool needs many users to run efficiently, but users will not use a network that is not running efficiently. Today there are 37 US dark pools and 20 abroad. Some are geared toward institutional business, with a minimum block size; others take retail business as well. Some dark pools were also originally for internal use.

The key feature of dark pools, of course, is that no order information is revealed, which allows very large orders to be done without moving the market. Although they all use the same crossing algorithm to match buyers and sellers, the dark pools show wildly different trading costs and average block sizes. (In all the examples, of course, ITG's POSIT networks perform best under the given metrics.) For block sizes, ECNs on average see 200-share blocks filled; many dark pools see 600-share blocks done; the POSIT networks see blocks of up to 47,000 shares filled.

The other distinguishing feature of dark pools is their crossing frequency: continuous or discrete (periodic auctions). He points out that the latter gives superior trading costs by a wide margin.

Measuring and Modeling Execution Cost and Risk
Robert Engle, NYU

This one went a little fast, and I only caught onto what he was talking about in the context of later lectures. I also don't have many notes as a result. Luckily the paper should be out for reading.

The author uses a regression model to attribute trading costs to different factors, including urgency and size. Some literature on the subject:

Almgren & Chriss 1999, 2000
Almgren 2003
Grinold & Kahn 1999
Domowitz & Krowas (?) 2004

Random Matrix Theory & Covariance Estimation
Jim Gatheral, Merrill Lynch

It only dawned on me later that this paper was about using PCA to address the problem of randomness producing spurious correlations between stocks. The tool he uses to clean the data comes from random matrix theory: the Tracy-Widom and Marchenko-Pastur densities model the distribution of eigenvalues of random matrices. An important feature of the Marchenko-Pastur density is that it predicts zero eigenvalue density past a certain edge if the underlying variables are uncorrelated.

Therefore, given an empirical correlation matrix, we can plot its eigenvalues to separate those expected to result from random noise from those likely to reflect real effects. You can then discard the noise eigenvalues and keep the "real" ones. Using this to remove spurious relationships reduced trading costs, volatility, and drawdown in the example trading strategies.
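The cleaning step can be sketched numerically. For a correlation matrix of N assets estimated from T observations, the Marchenko-Pastur upper edge is (1 + sqrt(N/T))^2; the sizes, seed, and the shrinkage recipe in the closing comment are my illustrative choices:

```python
import numpy as np

# Eigenvalues of a purely random correlation matrix fall (asymptotically)
# below the Marchenko-Pastur upper edge; empirical eigenvalues above that
# edge are candidates for genuine structure (e.g. the market mode).
rng = np.random.default_rng(0)
T, N = 2000, 100                   # observations x assets
q = N / T
lam_plus = (1 + np.sqrt(q)) ** 2   # MP upper edge for uncorrelated data

returns = rng.standard_normal((T, N))      # pure noise: no true correlation
corr = np.corrcoef(returns, rowvar=False)
eigs = np.linalg.eigvalsh(corr)

# With no true structure, the spectrum sits inside the MP bulk.
print(eigs.max(), lam_plus)
# Denoising recipe: keep eigenvectors whose eigenvalue exceeds lam_plus,
# average the rest, and rebuild the correlation matrix from both parts.
```

On real return data, a handful of eigenvalues (market, sectors) stand well above lam_plus while the rest form the noise bulk; everything inside the bulk carries no reliable information and is what the cleaning discards.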

Adaptive Arrival Price
Rob Almgren, Quantitative Brokers

Revisiting the question of "should you trade slow or fast", the author gives another take on the question. Some literature:

Implementation Shortfall, Perold 1988
Grinold & Kahn 1996
Almgren & Chriss 2000
Almgren & Lorenz 2006
Shefrin & Statman 1985

Execution Risk
Rob Ferstenberg, Morgan Stanley

The author addresses the same problem as Rob Almgren. He observes that the trader's objective is to minimize E[ TC ] + L V[ TC ], where TC is the trading cost and L is an urgency or risk-aversion parameter. The portfolio manager's objective is to maximize E[ y_T ] - L V[ y_T ], where y_T is the portfolio value at the terminal time T.

The author combines these equations to show that the goals are equivalent. (Although it seems they assumed from the start that L was the same for both.)
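One way to see the equivalence, under the simplifying assumption (mine, not necessarily the paper's) that the terminal value is a fixed benchmark ("paper") value minus the trading cost:

```latex
% Assume y_T = \bar{y} - TC, with \bar{y} the deterministic paper value
% at the arrival-price benchmark.
\begin{align*}
\mathbb{E}[y_T] - L\,\mathrm{V}[y_T]
  &= \bar{y} - \mathbb{E}[TC] - L\,\mathrm{V}[TC] \\
  &= \bar{y} - \bigl(\mathbb{E}[TC] + L\,\mathrm{V}[TC]\bigr),
\end{align*}
% so maximizing the manager's objective is exactly minimizing the
% trader's objective -- provided both use the same L.
```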

Does Algorithmic Trading Improve Liquidity?
Albert Menkveld, VU Amsterdam

This author has a completely different question in mind, asked from the 10,000-foot level. He observes that trading costs have decreased since the advent of algorithmic trading; the question is whether the relationship is causal. To answer it, he looks at a transition period in 2003, when the autoquote system was gradually being rolled out on the NYSE.

He found that although the bid-ask spread decreased as a result, market depth also decreased. The net effect on trading costs, however, is positive.
 
I don't have any copies; you'll have to google for them, although I can attach them as they are tracked down.
 
Jim Gatheral's talk sounds interesting. Do you have a copy of his paper?

Andy, I will ask Jim for the paper when I get back from vacation. He was going to do the same presentation at work the same day I left for vacation.

Doug, I will email you later about some of the papers. If not, I'll talk to you in class.
 
I agree with Doug's observations. It was a useful conference to attend, even though it was focused entirely on equities.

My top 3:
1. Execution Risk - the presentation closest to real implementation, with plenty of details. He was the only presenter who revealed some of the market-making algorithms used at MS. I also appreciated that he pointed out the theory behind their framework.

2. Dynamic Optimization as a Foundation for Custom Execution Algorithms - Hora is a major influence in the field. His algorithms for the large-block execution problem use a real-time scheduling approach, with dynamic programming as an intermediate tool for evaluating future states. I liked his theoretical approach to the problem. The ideal audience for this presentation would have been C.S. systems-scheduling people; since that was not the case, the presentation was pretty opaque to almost everyone.

3. Cul de Sacs and Highways: An Optical Tour of Dark Pool Trading Performance - a review of overall dark pool data that yielded a few interesting conclusions. The author has worked in the field from the beginning at ITG, the top provider of this service. It was easy to follow and took an unbiased approach: for instance, you would expect him to favor the continuous dark-pool model, yet his data show the superiority of auction-based crossing under the initial-market-price metric.

Perhaps just as interesting are the discussions these ideas spawn. For example, the coupling of algorithms that use the same PCA approach to extract sector data: if algorithms replace humans and all of them have the same view of the market, wouldn't they create the market?

The audience was mainly NYU students (math finance and MBA). A few other universities were represented: Baruch, Georgia Tech, Rutgers, etc. There were also some sponsors, though more on the sales side.
 