Where is the bottleneck in trading?

Joy Pathak

I went to a conference a few days ago whose main focus was Algo Trading and Execution. From what the majority of the presenters and the people I spoke to said, I gathered that the main bottleneck is in technology, not in the math/strategy. Everyone seems to be using the same strategies and the same algorithms, and at this point it is purely about speed. I went to the conference last year and everyone was talking about microseconds. This year they were talking about nanoseconds like they were trivial, and pushing towards picoseconds.

Is technology really the bottleneck? Most of them made a very good case for it. They are trying to reduce latency by optimizing both hardware and software. One firm was presenting a special wire that it claims reduces latency relative to the competition.

I guess it depends on the kind of trading too, but let's focus on high-frequency algo trading.
 
Actually I think the answer is no, and I'll say why:

Mathematics has never been the bottleneck, since it is the same for every financial institution: none of them can have "secret" mathematical methods that beat a competitor. Nor does technology vary much among institutions, since all of them have access to high-end hardware that faithfully executes the code humans write. But that code itself can always be pushed further.

We have the following situation now: all of the financial institutions have staff who know mathematics and programming. HFT itself is all about getting ahead of the competitor. Take "arbitrage hunting", where programs execute tens of thousands of transactions on nanosecond timescales, as you mentioned. Those programs run on similar platforms (why should we expect Citigroup to have better technology than JP Morgan or Goldman, or vice versa?), so the only real competition boils down to the code.

So I conclude that the reason is not technology or math, but intellectual improvement. Computer scientists should strive to make their versions of the algos better than everyone else's.
 
I would have also thought that proximity to the exchange, for example, is important. Even over optic fiber lines, co-located machines are going to have an advantage over those 10 miles away.
 
From what the majority of the presenters and the people I spoke to said, I gathered that the main bottleneck is in technology, not in the math/strategy. Everyone seems to be using the same strategies and the same algorithms, and at this point it is purely about speed ... This year they were talking about nanoseconds like they were trivial, and pushing towards picoseconds.

Maybe I misunderstand something here :) Could you point to the papers/slides if possible?

So far these words look quite odd to me:
- Everyone uses the same algorithms and there is nothing new to propose != bottleneck.
- Everyone is able to process data 10^6 times faster than a few years ago and can see how to improve it further = bottleneck.
 
Everyone uses the same algorithms and there is nothing new to propose != bottleneck; everyone is able to process data 10^6 times faster than a few years ago and can see how to improve it further = bottleneck.

Actually, what I said can be considered similar. We should look for the cause of the bottleneck (whatever it is) in the specifics that make competitors different, and that specific is the intellectual property - not the machines, the similarity of the algos, etc.
 
So I conclude that the reason is not technology or math, but intellectual improvement. Computer scientists should strive to make their versions of the algos better than everyone else's.

No, it's the technology. Code "ninjas" who will make your stuff run as fast as anyone else's are a dime a dozen; it's not exactly a forbidden or secret art. From talking to actual HFT guys, what Joy says is exactly accurate. Everyone can make their code as efficient as everyone else's, and everyone in the HFT space trades off of the same theses. Everyone is colocated (no longer an advantage but a prerequisite), everyone has a GPU farm. The startup costs in the HFT space have skyrocketed.

My take from interviewing for HFT positions and talking to people who actually do HFT: it's a technology arms race.

I've talked to at least one trader who moved from one of the top five algo trading firms by size to a smaller competitor because the smaller company had a more aggressive policy of investing in technology.
 

Interesting point. But as you say:

Everyone can make their code as efficient as everyone else's, and everyone in the HFT space trades off of the same theses.

In the same vein, can't everyone have the same machines to run their code on? Is it difficult for large companies to obtain technology at least as advanced as their competitors'?
 
Sure, they can afford them. The question is: do they actually get them? That is a question of cost and benefit.
 
Alexei - so is it possible there could be a situation where newer start-ups are frozen out of the co-location game, simply because there is no space left for the machines (although that's probably not a likely scenario), because the location prices out all but those with big bucks, or, worse, because it just shuts its doors to new customers?
 
I don't know the possibilities, since I don't know how much server space is left in Secaucus (or Chicago, or wherever the other exchanges keep their servers). I do know the following: everyone is colocated now. It's not just reserved for the big boys, but it is expensive. Basic capitalism tells me that if there is more demand, the price will increase accordingly. Did I mention that the startup costs of an HFT fund are really big, as far as technology goes?

What I've seen happen is start-ups renting technology from established companies in exchange for a sizeable chunk of their profits. But I don't know under what scenario you would be given that consideration - if I had to guess, you have to produce solid P&L to make it worth your (and their) while, and possibly know someone inside said bigger firm.
 
Everyone seems to be using the same strategies and the same algorithms, and at this point it is purely about speed.

What strategies/algorithms does everyone use?
 
Maybe I misunderstand something here :) Could you point to the papers/slides if possible?

I don't have any papers. Most of what I mentioned came from presentations at the seminars, and I don't have the PowerPoint slides.
 
Those programs run on similar platforms ... so the only real competition boils down to the code.

No. The code that the firms run is already optimized to the max. I don't think there is more optimization left. By the looks of it, it is all about the technology now. It wouldn't make sense to say that the code is the bottleneck. How much can code be optimized? I am not a programmer, so maybe I am not understanding it, but the big HFT firms hire some of the best programmers in the world. I highly doubt the code is the bottleneck. What they cannot hire is the best technology. Or even if they do... something new comes out a few days later.

What Alexei is saying does make sense, though. It reconfirms my suspicion.
 
My take from interviewing for HFT positions and talking to people who actually do HFT: it's a technology arms race.

Yeah. This is basically what everyone said at the conference. It's a technology arms race.
 
It is and it isn't. If you're looking to play in the HFT space, you're going to need to get into the FPGA, direct-connectivity game. When you're developing custom hardware and high-speed software, the build times are very slow, testing is very tough to do, and deployment is a pain in the butt. There are plenty of shops out there making money off slower but more innovative trading strategies. In those firms, flexibility and speed of deployment are more important. They'll see an opportunity, get a strategy coded up and running, and get into the market first. A few of the exchanges have also been making noise about charging per cancel message, which could kill HFT algos.

And finding quality developers who know what they're doing for HFT development is not easy.
 
Well, I think we should agree on the definition of bottleneck. If the code is highly optimized and each corporation engaged in HFT has developed such code, then it is reasonable to assume that at least the big players can obtain similar technologies - so where is the bottleneck? I liked both Joy's and Alexei's opinions, but I still think that assuming one financial institution on Wall Street has access to such technology while another does not is less logical than assuming they differ on the intellectual side. I don't think you can say that there exists one standard codebase (or several already-written ones) which is highly optimized and which all of the companies can obtain or recreate. Then why is there such high demand for computer scientists, and why are they encouraged to work with research teams to implement the logic they are handed in the most efficient way possible?

The code that the firms run is already optimized to the max. ... It is all about the technology now.

We should look for the cause in the things that make companies different. The code that one company's "best programmers in the world" write is supposed to run highly efficiently in the current environment - machines, CPUs, speeds, etc. - but when a new technology is launched, the programmers have to adapt their code to it, and that technology CAN AND DEFINITELY WILL BE OBTAINED BY COMPETITORS as well. So the technology doesn't differentiate them much; the process and efficiency of keeping up with new technology does. That's why I think code optimization, and not the technology, is the crucial part of HFT.

P.S. By "not the technology" I mean the following: all the firms live in the same environment, so technology available to one of them will be obtained by another - that is an inevitable part of competition. So technology is not relevant when comparing them.
 
Seriously man, you do a huge disservice by commenting on issues you know nothing about.
 