
Functional Programming

I think I've seen the Milewski future talk as well.
 
C++:
template <typename T1, typename T2, typename T3>
std::function<T3(const T1&)> operator << (const std::function<T3(const T2&)>& f,
    const std::function<T2(const T1&)>& g)
{ // Composition: (f << g)(x) == f(g(x)), so f << g maps T1 -> T3

    return [=](const T1& x)
    {
        return f(g(x));
    };
}

auto fC = f << g << g << 4.0*g; // Compose (4.0*g presupposes a scalar-multiplication operator for functions)
It's very easy to compose functions in a monadic style in C++11.
 
It's very easy to compose functions in a monadic style in C++11.
I wouldn't advertise this. It looks ugly as hell and lacks clarity.
 
This isn't really the canonical application of monads, although it is valid. A practical example would be to pass a C-style success/failure code alongside the result, where each function takes only one argument (x) but outputs two (x, failure code).

One thing I've personally done is use the State Monad to shove interesting side-effect results off to a state object that's propagated alongside a normal computation (for example, as you filter down a data set, you are primarily interested in the main set, but the deleted set is interesting as well).
 
I wouldn't advertise this. It looks ugly as hell and lacks clarity.
It looks OK if you think of the maths. Anyway, the point is that we can now do function composition in C++11.

Maybe C++17 will allow us to do all these requirements directly?

The mappings/relations are:

g: T1 -> T2, f: T2 -> T3, f << g: T1 -> T3. So in use: h = f << g;

edit: My V1 code can be improved as Polter has shown.

get it working
then
get it right
then
get it optimised.

As little boiler-plate code as possible is ideal.
 
This isn't really the canonical application of Monads, although valid. A practical example would be to pass a C style success/failure code alongside the result when each function only takes one arg (x) but outputs two (x, failure code).

A workaround could be to use std::tuple<> for the return type. (?)
 
A workaround could be to use std::tuple<> for the return type. (?)
It's not a matter of workaround, but rather the point of Monads. Using tuple is an implementation detail.

Call f and h functions with type signature (double -> double). You have demonstrated how to compose f and h. From what I understand, this is a specific subset of the application of Monads, it does not use their full power.

But suppose f and h are each functions with signature (double -> (double, int)). How do you compose them? The bind operator of a Monad does exactly this, typically it will thread the double arg through, but tells exactly how the ints will be combined between f and h (logical and? or? something else? your choice, each one yields a different Monad).

So, f >> h actually means, change h from (double -> (double, int)) to ((double, int) -> (double, int)), and compose the resulting function with f.

However, it goes further in that the "primary" types (double in this case) don't have to match between f and h, so long as bind provides the proper transformation. Eg, we could have (double -> (double, int)) and (double -> (str, int)).

In this context, the return operator does (double -> (double, int)) so that the value can be fed into the Monadic chain. A full chain would look like

g = f >> h
(return 10.0) >> g [yields some result]
 
@Daniel Duffy:

Here's a one-liner which works for an arbitrary number of arguments and keeps their types (including the qualifiers, like const; although I think a bit more is needed for perfect forwarding) without any run-time type erasure (which is employed by std::function -- and also by many functional programming languages):

auto compose = [](auto f, auto g) { return [=](auto&& ...x) { return f(g(x...)); }; };

Inspired by (although written differently): http://yapb-soc.blogspot.com/2012/12/clang-and-generic-polymorphic-lambdas.html

Personally, I'm also quite excited about fold expressions (some really elegant code snippets!): http://baptiste-wicht.com/posts/2015/05/cpp17-fold-expressions.html
http://www.italiancpp.org/2015/06/15/folding-expressions/

As for the real-life monadic examples -- Folly Futures, boost::future, as well as the improved C++ (Concurrency TS) futures are examples of continuation monad in action:
https://code.facebook.com/posts/1661982097368498
http://www.boost.org/doc/libs/maste...tion.html#thread.synchronization.futures.then

What this buys us in practice is ease of composability: Scala's Futures (inspired by Twitter's, which in turn influenced Folly's) provide an example:

"One nice property of programs written using promises with operations described so far and futures which are composed through monadic operations without side-effects is that these programs are deterministic. Deterministic here means that, given that no exception is thrown in the program, the result of the program (values observed in the futures) will always be the same, regardless of the execution schedule of the parallel program."

See: http://docs.scala-lang.org/overviews/core/futures.html

More:

https://meetingcpp.com/tl_files/2014/talks/ivan-cukic-monads-in-chains-meetingcpp14.pdf


This is also pretty interesting:
http://www.reddit.com/r/cpp/comments/3aicse/lightweight_very_simple_single_purpose_c_monad/
 
Nice post, Polter. Thank you.

The new developments should make doing maths in C++ much easier.

For example, one can do fixed point stuff in nonlinear algebra although I have no idea what the compiler is doing.

https://en.wikipedia.org/wiki/Fixed-point_combinator#Y_combinatory

It also means that I can transform PDEs to (0,1) without having to create a separate class for each specific interval transformation.

In short, all this will result in clean code.

//
One issue is how to model a set of functions as one unit+ the ability to access each one individually, e.g. consider the diffusion, convection ... functions of the (generalised) BS PDE. It can be done using a set of std::function<> embedded in a dedicated class. Maybe there is a more elegant approach?
 
C++:
template <typename T1, typename T2>
std::function<T2(const T1&)> operator + (const std::function<T2(const T1&)>& f,
                                         const std::function<T2(const T1&)>& g)
{ // Vector (pointwise) addition of functions

    return [=](const T1& x)
    {
        return f(x) + g(x);
    };
}
We can see functions as forming a vector space; here is addition. Is there a 'better' way?
 
C++:
    auto compose = [](auto f, auto g) { return [=](auto&& ...x) { return f(g(x...)); }; };
    auto f = [](double x) { return x*x;};
    auto g = [](double x) { return std::sqrt(x);};
    auto h = [](double x) { return std::exp(x);};

    auto v = compose(f, g);
    std::cout << v(2.0) << std::endl;

    auto hv = compose(h, v);
    std::cout << hv(2.0) << std::endl;
Here's function composition using terse C++14 code. It's OK, but I have not been able to write it as in mathematics: f(g(h)).
 
C++:
// Composition fn Real->Complex->Complex
    auto f1 = [](double x) { return std::complex<double>(x,x);};
    auto g1 = [](const std::complex<double>& z) { return std::sqrt(z);};
    auto v1 = compose(g1, f1);
    std::cout << v1(2.0) << std::endl;
Now I take functions with heterogeneous domain and range (co-domain). Seems to work well. Very elegant.
 
Here's a one-liner which works for an arbitrary number of arguments and keeps their types (including the qualifiers, like const; although I think a bit more is needed for perfect forwarding) without any run-time type erasure (which is employed by std::function -- and also by many functional programming languages):

auto compose = [](auto f, auto g) { return [=](auto&& ...x) { return f(g(x...)); }; };

Inspired by (although written differently): http://yapb-soc.blogspot.com/2012/12/clang-and-generic-polymorphic-lambdas.html

You mention no run-time type erasure, but avoiding indirection can be crucial for some applications of compositions. For example, if composition is nested (n-1)-times, e.g. for n=4:
C++:
auto f_4 = compose(f_3, compose(f_2,  compose(f_1, f_0)));
then if I am not mistaken you will have 3 levels of indirection for each function call of f_4, unless the compiler is able to inline the function calls. With indirection it will not be feasible to use the composition within computationally intensive routines.

Are you supposed to rely on the compiler to remove indirection by inlining (I guess I am assuming that the f_i are not polymorphic function objects to make the question reasonable)?
 
@MiloRambaldi:

Yeah, inlining usually works fine for lambdas themselves (of course, as you noted, the usual rules apply for what's inside them, including polymorphic function calls).
It's essentially the same path in an optimizing compiler implementation as the one corresponding to inlining function objects (i.e., calls to their `operator()`) -- and note that idiomatic C++98+ has been already heavily relying on that for algorithms taking predicates like `std::sort`, so there are strong incentives for this to work well. (It's always a good idea to look at the generated assembly and profile, as in general.)

This is a good question, actually, since inlining is important for a lot of modern C++ in general.
Say, also tag dispatching: http://www.boost.org/community/generic_programming.html#tag_dispatching
Note how `advance` always dispatches to an implementation in `advance_dispatch` -- which is itself a form of optimization (matching the best algorithm given the iterator category). If this introduced the cost of function calls (due to lack of inlining), then that could defeat the purpose.

However, there's a caveat to all that -- debug builds. These often won't be as optimized (it's also one of the reasons some of the game developers still avoid STL -- "fast debug builds" is one of the requirements for fast development iteration / fast time to market, and often an in-house library of domain-specific / non-generic containers is already available).

Well, that, and some people use `std::function` to store lambdas, which is a whole other story (in general: DON'T, unless you REALLY have to, and be as careful as you would be with virtual functions); although they should be at least optimizable to be faster than v-funcs:
http://stackoverflow.com/questions/...ompared-to-raw-function-pointer-and-void-this
http://stackoverflow.com/questions/13503511/sizeof-of-stdfunctionvoidint-type
 
You mention no run-time type erasure, but avoiding indirection can be crucial for some applications of compositions. [...] Are you supposed to rely on the compiler to remove indirection by inlining?
More than two levels of f(g) is becoming academic.
BTW anyone care to solve it using operator overloading (like maths) instead of a compose()?
 