
Assembler language

Hello everyone!

I'm interested in learning an assembly language to gain the fastest possible performance in algos, and I wonder how long it would take to learn the basics and move on to an advanced level. I'm also interested in hearing experts' opinions about the benefits and relevance of learning such a dead language. It would be good if you could point me to some sources to study as well. What would you advise? Thank you all.
 
Thank you KaiRu. Can I build an application on it that can be distributed to many computers at the same time? Assembler programming is very platform-specific, so do you think it is worth starting to build algos in it when it can only be deployed to a few devices?
 
Yes, unless you use CPU-specific instructions (SSE/SSE2, etc.). However, if you don't use CPU-specific instructions, the point of writing in assembly language converges to zero.
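To illustrate what "CPU-specific instructions" means in practice (a minimal sketch assuming an x86 target; the file name and values are made up), SSE instructions are usually reached from C/C++ through compiler intrinsics rather than hand-written assembly:

// sse_add.cpp -- adding four floats at once with SSE intrinsics (x86/x64 only).
// Compile with, e.g.:  g++ -O2 -msse2 sse_add.cpp
#include <emmintrin.h>   // SSE2 intrinsics (also pulls in SSE)
#include <cstdio>

int main() {
    __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);     // pack four floats into one register
    __m128 b = _mm_set_ps(40.0f, 30.0f, 20.0f, 10.0f);
    __m128 c = _mm_add_ps(a, b);                       // four additions in one instruction

    float out[4];
    _mm_storeu_ps(out, c);                             // store the packed result to memory
    std::printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}

On non-x86 hardware this won't even compile, which is exactly the portability trade-off under discussion.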
 
So before getting started with the link you pointed to above, one should learn how the hardware works and pin down a particular CPU to target in the future. One more question: is CPU-specific code drastically different from CPU to CPU? I mean, if I wrote code for CPU1, would writing the same algo for CPU2 take just as much time?
 
Not really; you just have to be aware of it and keep it all in your head, which is very tiresome. Just look at those 300-400 page manuals from AMD and Intel.
 
Hello everyone!

I'm interested in learning an assembly language to gain the fastest possible performance in algos, and I wonder how long it would take to learn the basics and move on to an advanced level. I'm also interested in hearing experts' opinions about the benefits and relevance of learning such a dead language. It would be good if you could point me to some sources to study as well. What would you advise? Thank you all.
Assembly cannot be dead. A CPU IS assembly... That being said, I believe the best way for a newbie to get a solid understanding is to compile C code and step through the resulting assembly...
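As a minimal sketch of that workflow (GCC flags shown; the exact assembly you see depends on compiler, flags, and platform):

// sum.cpp -- a tiny function whose compiled form is easy to study.
// Emit readable assembly:   g++ -S -O2 -masm=intel sum.cpp     (writes sum.s)
// Or step through it live:  g++ -g -O2 sum.cpp && gdb ./a.out
//                           (gdb) break sum
//                           (gdb) run
//                           (gdb) layout asm
//                           (gdb) stepi        # one machine instruction at a time
long sum(const long* data, long n) {
    long total = 0;
    for (long i = 0; i < n; ++i)
        total += data[i];    // watch how this becomes register moves and adds
    return total;
}

int main() {
    long data[4] = {1, 2, 3, 4};
    return static_cast<int>(sum(data, 4));
}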
 
Then what is the benefit of learning assembly itself? Why bother dealing with uglier code if C exists on top of it? I'm asking because I'm new to this topic. Thank you.
 
If you need advice and/or motivation from others to learn something, then ... you don't really need it :)
 
I'm really interested in it and see some benefits for speed, and I have the motivation as well. But I'd rather face the downsides now than after having started, only to realize it is not as useful as I thought. For example, are there employment opportunities for it in the quant field? How long does it take to master the language? Can one build commercial applications with assembly language? How much difficulty and time are involved in programming from CPU to CPU? etc.
 
At the risk of starting another language war, I must say that I don't believe as strongly as KaiRu that FPGAs are the future; certainly they are a part of the future, but GPUs are better for some tasks.
It is the case that more HFT is done in C/C++ than on GPUs and FPGAs combined; some firms even use Java.
FPGAs simply take longer to change, even if they execute faster, and currently the automated generation tools produce output that is rather less optimal than hand-built designs.

As for the question of why learn assembler: I teach an outline of this on the CQF C++ course, because if you understand what the machine is doing, you can write better code. Teaching multi-threading is actually easier if you take a mechanical view of sets of registers iterating through memory spaces, as one example.

Also, there is a requirement for people to program network cards and/or device drivers. Much of this can be done in C, but the standards required for HFT mean that you're unlikely to be great without some knowledge of assembler; indeed, there is a synergy between FPGA and assembler skills.
Note that I'm not saying FPGAs are doomed or an unwise choice, just that they are a choice, not a single inevitability.

I encounter people who don't know what a processor stack is, and for whom looking at the registers to try to understand why a nasty calculation bug is happening is beyond them. Some don't know the difference between read-only and constant, and find it hard to imagine how their high-level language code gets translated. Most of us never need to know all that much assembler, and it's pretty rare now that I write any, but the analogy I'd draw is this: an engineer might usefully model a bridge at a high level as a network of vectors, but unless he knows about the crystalline structure of metals, metal fatigue will come as a nasty shock.

That forms part of a definition of an expert: one who can choose between multiple levels of abstraction in his work. A bridge is vectors, metals, a node in a transport network, a series of discounted cash flows, a terrorist target, and a tourist attraction.
If you want to be the guy in charge of bridges, you need all of those views; ditto HFT, where one mixes market microstructure, C++, Level Zero Ethernet, poker, and internal politics over who gets the bonus.
As a career counsellor, a common problem I try to help people fix is that over-specialisation has left them unappreciated and sometimes vulnerable to changes in fashion that leave their speciality out in the cold. But just in case you think that's easy, recall that to get a job you must also show excellence in a speciality.
 
I encounter people who don't know what a processor stack is, and for whom looking at the registers to try to understand why a nasty calculation bug is happening is beyond them.

Could you be more specific on bugs that can be figured out by looking at the registers? Perhaps an example would make it clear.
 
Say you are using a third-party library with no access to the source code, and you cannot for the life of you find any bug in your own code...

Try creating self-modifying or obfuscated code in a higher-level language without at least debugging in assembly...
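A classic instance (a hedged sketch; the register names assume x86-64, where integer return values travel back in RAX/EAX) is a function that falls off the end without returning -- easy to miss in the source, but obvious once you watch the return register in a debugger:

// missing_return.cpp -- a bug that is easiest to see at the register level.
// On x86-64, an int return value comes back in EAX; if a code path never sets
// it, the caller silently reads leftover register contents (undefined behaviour).
#include <cstdio>

int lookup(int key) {
    if (key == 42)
        return 1;
    // Oops: no return on this path. The compiler may warn, but the binary just
    // leaves EAX holding whatever the last computation happened to put there.
}

int main() {
    // Step to the instruction after the call and inspect the register
    // (gdb: info registers eax) to see where the nonsense value comes from.
    std::printf("lookup(7) = %d\n", lookup(7));
    return 0;
}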
 
Wouldn't it be better (for performance/speed) to write the same HFT algos in assembler rather than in C/C++?

Not necessarily. It's a common misconception that low-level languages (like assembly, or even C/C++) always provide higher performance than higher-level and/or garbage-collected languages like C# or Java. Are there cases where you can improve performance by coding something in assembly? Of course. If you've already written some code, run into performance issues, spent time profiling/tuning your code, and you're still not happy with the performance -- then you *might* consider porting that code to assembly. Even in this case, there may still be other options; assembly is generally the language of last resort. Have you seen LMAX Disruptor? It's an HFT event-processing loop which is very low-latency, taking into account things like the size of the cache lines on the CPU. It's also written in Java.
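To make the cache-line point concrete (a minimal C++ sketch of the same idea, not Disruptor's actual Java code; the 64-byte line size is an assumption that holds on most current x86 CPUs): counters updated by different threads should not share a cache line, or each write invalidates the other core's copy ("false sharing"):

// false_sharing.cpp -- why cache-line size matters for low-latency code.
// Compile with:  g++ -O2 -pthread false_sharing.cpp
#include <atomic>
#include <thread>
#include <cstdio>

struct alignas(64) PaddedCounter {    // one counter per (assumed 64-byte) cache line
    std::atomic<long> value{0};
};

PaddedCounter counters[2];

void spin(int id) {
    for (long i = 0; i < 10000000; ++i)
        counters[id].value.fetch_add(1, std::memory_order_relaxed);
}

int main() {
    std::thread a(spin, 0), b(spin, 1);
    a.join();
    b.join();
    std::printf("%ld %ld\n", counters[0].value.load(), counters[1].value.load());
    return 0;
}

Dropping the alignas(64) typically makes the loop measurably slower on a multicore x86 box; that is the kind of hardware detail Disruptor engineers around.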
 
Interesting. And I have one more question: how can I write code in C# or Java, as you mentioned, that will be very low-latency? Is it possible? Loops in C# and Java suck and require a huge amount of time on a good (not super-) machine if you are dealing with big data. So...?
 
Like I said, check out LMAX Disruptor for a really good example of writing low-latency managed code:
https://code.google.com/p/disruptor/

Loops in C# and Java don't suck and don't require a huge amount of time. If you find some C# or Java code which is slow, take some time to look at how the code is written and what it's attempting to do; chances are the real slowdown is due to several layers of poorly written abstractions. Or, you may have seen code which uses iterators (a 'foreach') instead of a plain for loop -- which can really hurt performance if not used correctly.

If you're seriously interested in writing this kind of low-latency managed code, then you need to learn how to use machine-level profiling tools (e.g., Intel VTune), learn everything you can about the Java/C# languages and the various JVM or CLR implementations, and get as much experience as you can writing code in these languages, so you can learn about any performance pitfalls due to flaws in the implementations of the base class libraries -- and how to work around them.

So, while writing this kind of low-latency managed code is totally feasible, it does require a fair amount of extra learning effort on top of what you'd already need to know to write the same thing in C or C++.
 
Thank you JKPappas for that info and the link. I'm trying to pick up as many interesting insights as possible.
 