
Multithreading Without Synchronization: Boost C++ ASIO Strand (C++ Source Code Available)

When it comes to multithreaded programming, we usually start by thinking of the various thread synchronization mechanisms we can use explicitly, such as mutexes, semaphores, and condition variables, to control the visibility of shared resources across threads and avoid data races. Boost C++, for example, provides these mechanisms in Boost.Thread.
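
To make that concrete, here is a minimal sketch of the explicit-locking style (my own illustration, not code from Boost.Thread's documentation): a counter shared by several threads has to be protected by a mutex on every access.

#include <boost/thread.hpp>
#include <iostream>

int main() {
    boost::mutex mtx;                 // protects the shared counter below
    long counter = 0;

    boost::thread_group threads;
    for (int t = 0; t < 4; ++t) {
        threads.create_thread([&] {
            for (int i = 0; i < 100000; ++i) {
                boost::mutex::scoped_lock lock(mtx);   // explicit synchronization
                ++counter;
            }
        });
    }
    threads.join_all();
    std::cout << counter << std::endl;   // 400000, but only because of the lock
}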

However, in my opinion, we are often better off writing multithreaded programs without using these mechanisms explicitly, whenever possible. As we all know, thread synchronization comes at a cost in performance (context switches), debugging effort, and programming complexity.

To read more about how this can be done in an executable Boost C++ multithreaded program, with a line-by-line explanation of its code, please check out my blog at:
http://zhaowuluo.wordpress.com/2010/12/25/multithreading-boostasio/
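
For those who just want the flavor before clicking through, here is a stripped-down sketch of the idea (not the program from the blog, just a minimal illustration): handlers posted through a strand are guaranteed never to run concurrently, so the shared counter below needs no mutex even though several threads are servicing the io_service.

#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <iostream>

int main() {
    boost::asio::io_service io;
    boost::asio::io_service::strand strand(io);
    boost::asio::io_service::work work(io);      // keeps run() busy until we call stop()

    int counter = 0;                              // shared, but only ever touched via the strand
    for (int i = 0; i < 1000; ++i)
        strand.post([&counter] { ++counter; });   // serialized by the strand, no lock needed

    strand.post([&io, &counter] {                 // posted last, so it runs after the increments
        std::cout << "counter = " << counter << std::endl;
        io.stop();                                // let the worker threads return from run()
    });

    boost::thread_group pool;                     // several threads service one io_service
    for (int t = 0; t < 4; ++t)
        pool.create_thread([&io] { io.run(); });
    pool.join_all();
}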

Happy reading!
 
Nice write-up.

Did you run any tests to confirm that the implementation behind ASIO is actually better than writing a standard concurrent queue template?
 
I think the title is sort of misleading. The synchronization is taken care of by somebody else.
 
Am I missing something here? I see only one strand. This is more like multiplexing than multithreading. It is like using select to read/write to a socket from a single thread. There is no concurrency here (producer and consumer are not active at the same time).
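
With only one strand, everything posted through it is serialized; the parallelism in this model only shows up across different strands (for example, one strand per client session) running on a pool of io_service threads. A rough sketch of that, my own and not taken from the blog:

#include <boost/asio.hpp>
#include <boost/thread.hpp>

int main() {
    boost::asio::io_service io;
    boost::asio::io_service::strand strand_a(io);
    boost::asio::io_service::strand strand_b(io);

    // Handlers on strand_a never overlap one another, and likewise for strand_b,
    // but a strand_a handler and a strand_b handler may run at the same time
    // on two of the pool threads below.
    for (int i = 0; i < 10; ++i) {
        strand_a.post([] { /* work item for session A */ });
        strand_b.post([] { /* work item for session B */ });
    }

    boost::thread_group pool;
    for (int t = 0; t < 2; ++t)
        pool.create_thread([&io] { io.run(); });  // run() returns once the queue drains
    pool.join_all();
}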
 
This pattern is useful for problems where you never want a thread to block. It can be used in client-server apps with very many clients on slow networks, where you don't want a thread for each new client, so a few threads do the job.

For compute-intensive algos, lightweight threads might be more scalable. Calling into io_service involves an OS call, so that's kind of slowish.

It all depends on what the problem is.

Speaking of producer-consumer, here is one in Boost.Thread:

http://www.quantnet.com/cplusplus-multithreading-boost/
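
The usual shape of such a queue in Boost.Thread (a minimal sketch of my own, not the code from the linked article) is a mutex guarding the queue plus a condition variable that lets the consumer sleep until data arrives:

#include <boost/thread.hpp>
#include <queue>
#include <iostream>

class PCQueue {
public:
    void push(int v) {
        boost::unique_lock<boost::mutex> lock(mtx_);
        q_.push(v);
        cond_.notify_one();                     // wake a waiting consumer
    }
    int pop() {
        boost::unique_lock<boost::mutex> lock(mtx_);
        while (q_.empty())
            cond_.wait(lock);                   // releases the lock while asleep
        int v = q_.front();
        q_.pop();
        return v;
    }
private:
    std::queue<int> q_;
    boost::mutex mtx_;
    boost::condition_variable cond_;
};

int main() {
    PCQueue q;
    boost::thread producer([&q] { for (int i = 0; i < 5; ++i) q.push(i); });
    boost::thread consumer([&q] { for (int i = 0; i < 5; ++i) std::cout << q.pop() << "\n"; });
    producer.join();
    consumer.join();
}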

Are you proposing Asio for heavy computation? What would be interesting is 1) comparing Asio with a bog-standard serial solution, and 2) Asio versus Boost.Thread.
 
When it comes to multithreaded programming, we usually start by thinking of the various thread synchronization mechanisms we can use explicitly, such as mutexes, semaphores, and condition variables, to control the visibility of shared resources across threads and avoid data races. Boost C++, for example, provides these mechanisms in Boost.Thread.

This is one approach, but unfortunately it can eventually lead to tears in practice. Initial enthusiasm gives way to other emotions once the code has to be debugged. In many cases management decides to go back to serial code...

The optimal way is to find the potential concurrency in a problem by using task and/or data decomposition, then move to parallel patterns (e.g. Master/Worker, Producer-Consumer, loop parallelism), and only then to a thread implementation.
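
As a tiny illustration of data decomposition and loop parallelism (a sketch of my own, not from any book): split an independent loop's range across a few threads, keep the partial results private, and combine them at the end, so no locks are needed during the loop.

#include <boost/thread.hpp>
#include <vector>
#include <numeric>
#include <iostream>

int main() {
    std::vector<double> data(1000000, 1.0);
    const int nthreads = 4;
    std::vector<double> partial(nthreads, 0.0);   // one private slot per thread

    boost::thread_group pool;
    const std::size_t chunk = data.size() / nthreads;
    for (int t = 0; t < nthreads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t == nthreads - 1) ? data.size() : begin + chunk;
        pool.create_thread([&data, &partial, t, begin, end] {
            partial[t] = std::accumulate(data.begin() + begin, data.begin() + end, 0.0);
        });
    }
    pool.join_all();

    std::cout << std::accumulate(partial.begin(), partial.end(), 0.0) << std::endl;
}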

And thinking of 'parallel objects' is awkward.

Mattson et al. (2005) have a good book on this process.
 