
Floating point precision in C++

Hi,

I'm trying to get various viewpoints on this topic. The issue I have is that I want my calculations to be as precise as possible. I know this comes at the cost of speed, etc. I'd like to get different opinions on this, so please post what you think the best method would be.

I know float gives you 7 digits of precision, double gives you 15, and long double gives you 19 (is this machine dependent, by the way? Or is it the number of bytes each type uses up that's machine dependent?). Is there more precision to be had without sacrificing too much speed? Your (Quantnet community) opinions on this would be very helpful. For now I use double everywhere.
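A quick way to check what a given machine and compiler actually give you (a minimal sketch; the exact sizes and digit counts are implementation dependent, not fixed by the language):

```cpp
#include <iostream>
#include <limits>

int main() {
    // digits10 = decimal digits the type can hold reliably; both it and
    // sizeof are implementation defined, so check them on your own box.
    std::cout << "float:       " << sizeof(float)       << " bytes, "
              << std::numeric_limits<float>::digits10       << " digits\n"
              << "double:      " << sizeof(double)      << " bytes, "
              << std::numeric_limits<double>::digits10      << " digits\n"
              << "long double: " << sizeof(long double) << " bytes, "
              << std::numeric_limits<long double>::digits10 << " digits\n";
}
```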
 
I'm glad you're looking at this.
More than one hiring manager has bitched to me that they see people who claim they can do numerical analysis who think doubles are reals.

There are several schemes for calculations.
The most mangy is fixed point, which fakes real arithmetic with plain 32-bit integers on CPUs that have no floating-point hardware.
One of the regrets of my life is that the person who invented this was in my office once, and I didn't kill him before he published.
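For what it's worth, a minimal sketch of what fixed point looks like in C++ (Q16.16 format assumed here; the names are made up for illustration):

```cpp
#include <cstdint>
#include <iostream>

// Q16.16 fixed point: 16 integer bits, 16 fractional bits, stored in int32_t.
using fixed_t = std::int32_t;
constexpr int FRAC_BITS = 16;

constexpr fixed_t to_fixed(double x)   { return static_cast<fixed_t>(x * (1 << FRAC_BITS)); }
constexpr double  to_double(fixed_t x) { return static_cast<double>(x) / (1 << FRAC_BITS); }

// Multiplication needs a wider intermediate to avoid overflow,
// then a shift back down to the Q16.16 scale.
constexpr fixed_t fx_mul(fixed_t a, fixed_t b) {
    return static_cast<fixed_t>((static_cast<std::int64_t>(a) * b) >> FRAC_BITS);
}

int main() {
    fixed_t price = to_fixed(101.25);
    fixed_t qty   = to_fixed(3.5);
    std::cout << to_double(fx_mul(price, qty)) << '\n';  // 354.375
}
```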

The three main floating-point types, single, double, and extended precision, are covered by an IEEE standard (IEEE 754).
Floating point on Intel-style CPUs is pretty much the same speed whether you pick double or single precision, especially if you're programming C, which tends to coerce FP operations to double anyway.
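To see what that coercion looks like in practice, a small sketch (standard C/C++ promotion rules; in modern C++ arithmetic between two floats does stay float, but unsuffixed literals and varargs drag you back to double):

```cpp
#include <cstdio>

int main() {
    float f = 0.1f;

    // An unsuffixed literal like 0.1 is a double, so this expression
    // is evaluated in double precision despite f being a float.
    std::printf("sizeof(f + 0.1)  = %zu\n", sizeof(f + 0.1));   // typically 8
    std::printf("sizeof(f + 0.1f) = %zu\n", sizeof(f + 0.1f));  // typically 4

    // Passing a float through a variadic function (printf) promotes it
    // to double as well -- there is no %f-for-float conversion.
    std::printf("f = %.10f\n", f);
}
```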

Single precision isn't that great for quant calcs in current work; however, in exotic high-end FP (which we're looking at more than ordinarily hard), maths-optimised hardware like GPUs or PPUs can get you orders of magnitude in speed, at the price of being rather harder to code up.
Real programmers are doing it in FPGAs, which resemble the clunky Intel architecture the way the Delta Force resembles the SAS.
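To make the "SP isn't that great" point concrete, here's a toy accumulation, a hedged example rather than anything from a real model: summing a large number of small increments in float versus double.

```cpp
#include <iostream>
#include <iomanip>

int main() {
    const int n = 10000000;
    float  sum_f = 0.0f;
    double sum_d = 0.0;

    // Adding the same small increment many times: once the running sum is
    // much larger than the increment, each float addition loses bits.
    for (int i = 0; i < n; ++i) {
        sum_f += 0.01f;
        sum_d += 0.01;
    }

    std::cout << std::setprecision(12)
              << "float : " << sum_f << '\n'   // noticeably off 100000
              << "double: " << sum_d << '\n';  // close to 100000
}
```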

The other mainstream setup is extended floating point, which is rarely coded to in C++ but is used extensively within Excel and some other apps. It's better, but slower.
Excel VB was written in C++, and hence calculations done in VB often give different results from those done by cell calculations.
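If you want to poke at extended precision from C++, long double on most x86 compilers (not MSVC, where it is just another 64-bit double) maps onto the 80-bit x87 format; a minimal sketch of the sort of discrepancy that produces:

```cpp
#include <iostream>
#include <iomanip>

int main() {
    // 0.1 is not exactly representable in binary; the 64-bit and 80-bit
    // roundings of it differ, so the same arithmetic can give slightly
    // different answers depending on which format the work is done in.
    double      d = 0.1;
    long double e = 0.1L;

    std::cout << std::setprecision(21)
              << "double      0.1 = " << d << '\n'
              << "long double 0.1 = " << e << '\n'
              << "difference      = " << static_cast<long double>(d) - e << '\n';
}
```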
There's useful stuff here...
http://babbage.cs.qc.edu/courses/cs341/IEEE-754references.html

I would say that low-level FP is not for the faint-hearted. The quote I use in my C++ for interviews lecture, "You are not expected to understand this, it is included for completeness only", was from a description of error handling in this context.
 