
Best first programming language

Great discussion here. Had to weigh in / rant. Note: I am still a student, and by no means an expert.

My experience: I first started learning C and then C++ (started, but haven't finished...)

I had a great time learning, up until I hit the subject of Classes (aka OO). What a pain. Why would I ever use classes, when I could write a function that does the job? Input in, output out. So I thought at the time.

Why is it a pain? Private/public/protected. Setters/getters. Interface / implementation files (let's have some fun with copy/paste!).

This all gets in the way of how useful OO really is.

Seriously. Let's make all the state variables private, because that's best practice. OK, now expose the variables you actually want to use by writing getters/setters for them. Just do it, because everybody does. Wait, what?
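To make the complaint concrete, here is a minimal C++ sketch (illustrative names only, not from any particular codebase) of the pattern being described: a private field wrapped in a getter and a setter that add nothing beyond what a plain public member would give you.

#include <iostream>

// Illustrative only: a class whose getters/setters add no behaviour
// beyond direct member access.
class Account {
private:
    double balance;                                  // "best practice": make state private...
public:
    Account() : balance(0.0) {}
    double getBalance() const { return balance; }    // ...then expose it again via a getter
    void setBalance(double b) { balance = b; }       // ...and a setter
};

int main() {
    Account a;
    a.setBalance(100.0);                  // in effect: a.balance = 100.0;
    std::cout << a.getBalance() << "\n";  // in effect: a.balance
    return 0;
}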

Screw that. Save encapsulation for later. I'm a functions guy from now on.

And that's how I existed, until along came Python. Holy.

Now that's how OO works. I get it now.

C++ is more intimidating than Python because it is a huge language, and it is difficult for a novice to know what is and what is not useful.

However, the language is less important than choosing an area/domain where you can easily find objects and classes. For me, in the past, 2-D and 3-D computer graphics were a good way in. In fact, OO started life in this area. There are many others.

20 years ago some folk claimed Smalltalk was better for learning OOP

http://en.wikipedia.org/wiki/Smalltalk

Now Python is the new Smalltalk?
 
Cuch, are you insisting that C++ is the best first programming language? If so I think you're hopelessly out of touch. You would surely have to concede that C is better than C++ for a first language (since C is a 'beginning' or non-OOP subset of C++ anyway, pretty much). And then one can give 100 reasons why C isn't a good first programming language either. See for instance: http://www.tcm.phy.cam.ac.uk/~mjr/C/hello_world.html
 
Cuch, are you insisting that C++ is the best first programming language? If so I think you're hopelessly out of touch. You would surely have to concede that C is better than C++ for a first language (since C is a 'beginning' or non-OOP subset of C++ anyway, pretty much). And then one can give 100 reasons why C isn't a good first programming language either. See for instance: http://www.tcm.phy.cam.ac.uk/~mjr/C/hello_world.html

Barny, whatever gave you that idea? Did you read my posts?

The religious war between C and FORTRAN lovers is an old one, and those of us supporting FORTRAN are often accused of living in an era somewhere between the 1960's and the age of the dinosaurs. Although I do write code in both C and FORTRAN, and even write numerical code in C, this page highlights some of the issues of which good C programmers should be aware.

This is really past tense. :rolleyes:

I doubt that 'out of touch' is an issue ;) Do you know how many programmers I talk to on a regular basis? :) Mostly C++ and C#, but never FORTRAN.

BTW, your link is 13 years old! And worse, the real title is here: http://www.tcm.phy.cam.ac.uk/~mjr/C/C_rant.html. I agree that Fortran is better than C for numerics, but this is like flogging a dead horse. I no longer know anyone who uses Fortran, but I know thousands of C programmers.

Unfortunately, again we are seeing subjective rants. Pity.
Learn C, for sure, and then C++. Then C#, Java, and Python will follow.
 
I no longer know anyone who uses Fortran, but I know thousands of C programmers.

That's probably because you're in touch only with people from this narrow field of quant finance, and for whatever reason people in this field have a strong bias towards C/C++... Many people doing serious work (Oil & Gas, CFD, CEM, and any other scientific computing discipline) happily use Fortran on an everyday basis, and make good money from it. Do you think such a large Fortran ecosystem would exist if there were no demand? Take compilers, for example: there are several independent Fortran compiler vendors (like Portland Group, NAG, Lahey, Absoft), and almost every important hardware manufacturer (like Intel, IBM, Sun, etc.) is or was offering a Fortran compiler of its own.

I remember that in my first encounter with quant finance I had to implement a GPU version of a PDE solver. I had existing C++ CPU code at my disposal, the product of several weeks of work by a programmer better than myself; this C++ code would make even the hardest-to-please C++ purist happy: heavy use of expression templates, beautiful class hierarchies (imagine a differential operator as the abstract base class at the root of a corresponding tree of classes), etc.

When I completed my first, very dumb GPU version of the solver and ran it side by side with the CPU version, I was very surprised to find that even this dumb GPU version was already much faster. So I realized I needed a better CPU version for a fair comparison, and in a couple of hours I cranked out a Fortran solver that was 10 times simpler and 2000 times faster than the C++ solver...

Moral of the story: C++ is OK as a language of choice for many aspects of the work (in particular, there are loads of good resources around it, and plenty of people knowledgeable in C++), but it is not suited to serious numerical work unless you are a hard-core C++ expert, like the people who wrote Eigen or similar libraries. People with that level of C++ skill are very, very rare; and even their work (which I'll readily admit is fascinating) is of limited practical use, as it does not lend itself to parallelization (and parallelization is the name of the game for any serious number-crunching work today). It also has little to do with C++ per se, since it rests on templating mechanisms and could in principle be implemented equally well in any other templating system able to generate compilable code.
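As an aside, for readers unfamiliar with the kind of design described above, here is a minimal, hypothetical C++ sketch (names invented for illustration, not taken from the original code) of a differential operator as an abstract base class with one concrete operator derived from it.

#include <cstddef>
#include <iostream>
#include <vector>

// Hypothetical sketch of the style of hierarchy described above:
// an abstract differential operator at the root, concrete operators below.
class DifferentialOperator {
public:
    virtual ~DifferentialOperator() = default;
    // Apply the operator to a grid function u with spacing dx.
    virtual std::vector<double> apply(const std::vector<double>& u,
                                      double dx) const = 0;
};

// Second-order central difference approximation of d^2/dx^2.
class SecondDerivative : public DifferentialOperator {
public:
    std::vector<double> apply(const std::vector<double>& u,
                              double dx) const override {
        std::vector<double> r(u.size(), 0.0);
        for (std::size_t i = 1; i + 1 < u.size(); ++i)
            r[i] = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / (dx * dx);
        return r;
    }
};

int main() {
    SecondDerivative d2;
    std::vector<double> u = {0.0, 1.0, 4.0, 9.0, 16.0};       // u(x) = x^2 on a unit grid
    for (double v : d2.apply(u, 1.0)) std::cout << v << ' ';  // interior values ~ 2
    std::cout << '\n';
    return 0;
}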
 
You've completely missed the point. This is not FORTRAN vs C. The question is: do you think C++ is the best first language to learn? That is, for introducing someone to the concepts of loops, functions, control flow (OOP?)?
 
That's probably because you're in touch only with people from this narrow field of quant finance, and for whatever reason people in this field have a strong bias towards C/C++... Many people doing serious work (Oil & Gas, CFD, CEM, and any other scientific computing discipline) happily use Fortran on an everyday basis, and make good money from it. Do you think such a large Fortran ecosystem would exist if there were no demand? Take compilers, for example: there are several independent Fortran compiler vendors (like Portland Group, NAG, Lahey, Absoft), and almost every important hardware manufacturer (like Intel, IBM, Sun, etc.) is or was offering a Fortran compiler of its own.

Indeed! I used to write FEM codes in Fortran for oil/gas and semiconductors in the ole days. But AFAIK Fortran is very niche in QF. Many Fortran applications have also been ported to C.

I am casting no aspersions on Fortran. Good language for arrays.

edit: On the other hand, I have seen that some Fortran developers are reluctant to make the move to C.
 
NO.
First C. Then C++.

As was pointed out above, the pure performance game has moved on to FPGAs, GPUs, and related specialized domains where electrical engineering and physics take precedence over language selection.

C++ is neither the fastest nor the most productive.

Ironically, to address its shortcomings, especially with productivity...

C++ is becoming harder to learn.

...

Well, since we can't remove any significant features from C++ without breaking large amounts of code, C++11 will be larger than C++98, so if you want to know every rule, learning C++11 will be harder.

...

That is directly at odds with programmer productivity. Programmer productivity is better facilitated in higher level languages.

...

In this article we propose a new approach for implementing option pricing models in finance. Financial engineers typically prototype such models in an interactive language (such as Matlab) and then use a compiled language such as C/C++ for production systems. Code is therefore written twice. In this article we show that the Python programming language and the Cython compiler allows prototyping in a Matlab-like manner, followed by direct generation of optimized C code with very minor code modifications. The approach is able to call upon powerful scientific libraries, uses only open source tools, and is free of any licensing costs. We provide examples where Cython speeds up a prototype version by over 500 times. These performance gains in conjunction with vast savings in programmer time make the approach very promising.

...

That's a remarkable feat for a language that's easy enough to be taught to children.
 
I agree, it's getting harder to keep up with C++.

That is directly at odds with programmer productivity. Programmer productivity is better facilitated in higher level languages.

The examples here are just toy examples. Binomial trees are not what you would call state of the art. It's rather trivial. A Heston PDE in any language, then you're talking.
 
I remember that in my first encounter with quant finance I had to implement a GPU version of a PDE solver.

I have no experience with GPU and PDE. But from the grapevine it does not seem to work (a single PDE is not SPMD).
 
I agree, it's getting harder to keep up with C++.

That is directly at odds with programmer productivity. Programmer productivity is better facilitated in higher level languages.

The examples here are just toy examples. Binomial trees are not what you would call state of the art. It's rather trivial. A Heston PDE in any language, then you're talking.

Are you going to back that up with evidence?

Or are you claiming that some languages are more Turing complete than others? ;)
 
Are you going to back that up with evidence?

Or are you claiming that some languages are more Turing complete than others? ;)

Back up what? The first or the second remark? :)

Basically, to show how good a language is you need to test it on challenging problems (e.g. not binomial).

I would never say that; it's way above my head.
 
I have no experience with GPU and PDE. But from the grapevine it does not seem to work (a single PDE is not SPMD).

(Sorry for another distraction from the thread subject.)

I should have been more specific: in this particular case, it was about solving a PDE using a finite-difference approach, the implicit variant, which in turn boils down to coming up with a fast sparse solver implementation. There is a lot of interest in, and effort going into, implementing these on GPUs. Again, in this particular case it was a tri-diagonal solver, and parallelization was achieved both within the individual solver and by doing many problems at once on the GPU (as in pricing multiple options at once).
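For context, here is a minimal serial sketch of the Thomas algorithm for a tri-diagonal system, the kind of solver being discussed; this is only an illustration, not the GPU code in question. Batched pricing amounts to solving many independent systems like this in parallel, while parallelizing a single solve typically requires a different algorithm (e.g. cyclic reduction).

#include <cstddef>
#include <iostream>
#include <vector>

// Plain serial Thomas algorithm for A x = d, where A is tri-diagonal with
// sub-diagonal a, diagonal b and super-diagonal c (a[0] and c[n-1] unused).
// Illustrative only; GPU variants batch many systems or use cyclic reduction.
std::vector<double> solveTridiagonal(std::vector<double> a,
                                     std::vector<double> b,
                                     std::vector<double> c,
                                     std::vector<double> d) {
    const std::size_t n = d.size();
    // Forward elimination.
    for (std::size_t i = 1; i < n; ++i) {
        const double m = a[i] / b[i - 1];
        b[i] -= m * c[i - 1];
        d[i] -= m * d[i - 1];
    }
    // Back substitution.
    std::vector<double> x(n);
    x[n - 1] = d[n - 1] / b[n - 1];
    for (std::size_t i = n - 1; i-- > 0; )
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i];
    return x;
}

int main() {
    // [2 1 0; 1 2 1; 0 1 2] x = [3; 4; 3]  ->  x = (1, 1, 1)
    std::vector<double> a = {0.0, 1.0, 1.0};
    std::vector<double> b = {2.0, 2.0, 2.0};
    std::vector<double> c = {1.0, 1.0, 0.0};
    std::vector<double> d = {3.0, 4.0, 3.0};
    for (double v : solveTridiagonal(a, b, c, d)) std::cout << v << ' ';
    std::cout << '\n';
    return 0;
}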

Side note: One should not think about accelerators (GPU, FPGA, etc.) in "SPMD" and similar archaic terms - this whole classification invented by Michael J. Quinn is way outdated, and there exist a number of more contemporary methods for the design and analysis of parallel programs. But most of the time you don't think much about theory; instead, you look at the accelerator programming model and do your best to map your algorithm onto it. Sometimes you don't get much of a speedup, but oftentimes you do (and sometimes those speedups are spectacular).
 
I should have been more specific: in this particular case, it was about solving a PDE using a finite-difference approach, the implicit variant, which in turn boils down to coming up with a fast sparse solver implementation. There is a lot of interest in, and effort going into, implementing these on GPUs. Again, in this particular case it was a tri-diagonal solver, and parallelization was achieved both within the individual solver and by doing many problems at once on the GPU (as in pricing multiple options at once).

The ADE method does not need an LU solver; it is explicit and unconditionally stable. ADI is less amenable to GPUs and slower. AFAIK GPUs only support the SPMD (SIMD, if you like) pattern.

Here is a link to ADE: http://www.wilmott.com/messageview.cfm?catid=44&threadid=92323
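To illustrate the explicit-versus-implicit distinction only (this is a generic FTCS step for the heat equation, not the ADE scheme itself, and unlike ADE it is only conditionally stable), an explicit step updates each grid point directly from known values, with no linear system to solve:

#include <cstddef>
#include <iostream>
#include <vector>

// One generic explicit (FTCS) time step for u_t = u_xx: each new value is
// computed directly from known values, so no LU/tri-diagonal solve is needed.
// Shown only to illustrate "explicit"; this is NOT the ADE scheme, and FTCS
// is only conditionally stable (it requires lambda = dt/dx^2 <= 0.5).
void explicitStep(std::vector<double>& u, double lambda) {
    std::vector<double> next(u);
    for (std::size_t i = 1; i + 1 < u.size(); ++i)
        next[i] = u[i] + lambda * (u[i - 1] - 2.0 * u[i] + u[i + 1]);
    u = next;
}

int main() {
    std::vector<double> u = {0.0, 0.0, 1.0, 0.0, 0.0};  // initial spike
    explicitStep(u, 0.5);                               // one step at the stability limit
    for (double v : u) std::cout << v << ' ';           // expect 0 0.5 0 0.5 0
    std::cout << '\n';
    return 0;
}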

Multiple options in parallel is kind of easy.

Side note: One should not think about accelerators (GPU, FPGA, etc.) in "SPMD" and similar archaic terms - this whole classification invented by Michael J. Quinn is way outdated

Well, no. The book by Mattson et al discusses many parallel design patterns.
 
The ADE method does not need an LU solver; it is explicit and unconditionally stable.

Great. Why don't you implement it on an accelerator architecture, and profit?

My point: I was asked to implement the fastest possible FD solver on a GPU, so I did it (and, as an aside, sort of re-assured myself that C++ is crap for any sort of numerical work - which is why I mentioned this whole project here). If I had been asked to evaluate possible approaches for solving the PDE, then hopefully I'd have run into this stuff while researching, and would have taken ADE into account too. But that sort of decision was not mine to make in this particular case (note also that all of this happened 5 years ago), so I did what I was asked to do and picked up my money (and my employer also picked up its money along the way, as the GPU-based solution was several dozen times faster than the CPU-based solution they had used previously).

Well, no. The book by Mattson et al discusses many parallel design patterns.

This book is outdated too (8 years is a lot in HPC), and all this "parallel programming patterns" stuff was never well received by the parallel programming community anyway (most of the time, it was about trying to make a lot of noise re-hashing the obvious).
 
At this stage the mathematical foundations for ADE are still being worked on (lots to do), so GPU plus ADE is not yet the right move. And the PDEs are small, so the sequential version is already 40% faster than ADI.
 
This book (Mattson et al) is outdated too (8 years is a lot in HPC), and all this "parallel programming patterns" stuff was never well received by the parallel programming community anyway (most of the time, it was about trying to make a lot of noise re-hashing the obvious).

This is a sweeping statement, but I'll take your word for it that this is so for HPC. In general it is certainly not true, in particular for desktop applications in C++ and C# on multicore processors. The libraries TBB, PPL and TPL all bear witness to this.

What is known is (was?) that the efficiency of GPUs is 80% less than possible if the problem is not SIMD.

Just to mention: Master/Worker, Producer-Consumer, SIMD (what the GPU uses), etc. are far from obsolete IMO. Maybe there are better ones in the meantime?
 
This is a sweeping statement, but I'll take your word for it that this is so for HPC. In general it is certainly not true, in particular for desktop applications in C++ and C# on multicore processors. The libraries TBB, PPL and TPL all bear witness to this.

When I mentioned "re-hashing the obvious" above, I meant the patterns for parallel programming, not design patterns in general. I remember that when first reading the GoF book there was lots of new and valuable stuff in there for me, but when skimming through the Mattson book it felt, the whole time, like hitting the iterators chapter in the GoF book - "c'mon, everyone knows that...". Ditto for any decently knowledgeable parallel programmer.

As far as the libraries you mentioned above are concerned: these are all completely irrelevant for HPC work. HPC people do OpenMP, MPI, then CUDA or OpenCL, and eventually VHDL or Verilog for FPGA stuff. Libraries like TBB or TPL or PPL (I've heard of that last one for the first time now that you mention it) are geared towards programmers of desktop applications and similar, to achieve smaller-scale parallelism. Most of what these libraries cover is doable through the plain POSIX threads API, but desktop application programmers are simply incapable of handling pthreads, so it has to be wrapped up for them somehow so that they don't shoot themselves in the foot (and the vendors also want to lock them into their product portfolios). But these libraries are not usable for any massively parallel work. And mentioning Microsoft in the context of any HPC-related stuff will just get you a good laugh anyway.
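For illustration, the kind of loop-level OpenMP parallelism referred to here can be as little as a pragma on an independent loop (a minimal sketch; compile with an OpenMP-capable compiler, e.g. g++ -fopenmp):

#include <cstddef>
#include <iostream>
#include <vector>

// Minimal OpenMP example: element-wise y = a*x + y (AXPY).
// Every iteration is independent, so the loop parallelizes trivially;
// without OpenMP the pragma is ignored and the code runs serially.
int main() {
    const std::ptrdiff_t n = 1000000;
    std::vector<double> x(n, 1.0), y(n, 2.0);
    const double a = 3.0;

    #pragma omp parallel for
    for (std::ptrdiff_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];

    std::cout << y[0] << "\n";  // expect 5
    return 0;
}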

What is known is (was?) that the efficiency of GPUs is 80% less than possible if the problem is not SIMD.

As mentioned in one of my previous messages, you should really update yourself on accelerator architectures and capabilities. Thinking about accelerators in terms of "oh, it's SIMD, so it cannot be used for anything" suggests you read some clueless editorial on GPU compute architecture back in 2007 when CUDA was first released, and kept thinking it was true. The accelerator programming model is indeed not MIMD, but it is still much more capable than SIMD. In particular, with CUDA the best performance is achieved when groups of 32 threads execute the same code path, but different groups can safely execute completely different code paths without any performance penalty (except for the initial test on which code path to take); and CUDA works best when you launch thousands of threads at once.
 
For good or bad, Microsoft products are extremely important. Especially Excel integration.
Each to his own.

HPC is a niche all to itself. It demands special skills and disposition. You have HPC programmer != application programmer.
 
For good or bad, Microsoft products are extremely important.

Never claimed that they are not important, I just said that they are completely and utterly irrelevant for serious number crunching work.

HPC is a niche all to itself. It demands special skills and disposition. You have HPC programmer != application programmer.

Well:
"Quant finance is a niche all to its self. It demands special skills and disposition. You have quant developer/analyst/whatever != <paste-any-other-domain-here> developer/analyst/whatever."
Oh, and to be more precise:
HPC programmer >> application programmer​
 