Concurrency
The concurrency support in C++ makes it possible for a program to execute multiple threads in parallel. Concurrency was first introduced into the standard with C++11. Since then, new concurrency features have been added with each new standard update, such as in C++14 and C++17. Before C++11, concurrent behavior had to be implemented using native concurrency support from the OS, using POSIX Threads, or third-party libraries such as Boost. The standardization of concurrency in C++ now makes it possible to develop cross-platform concurrent programs, which is a significant improvement that saves time and reduces the potential for errors. Concurrency in C++ is provided by the thread support library, which can be accessed by including the <thread> header.
A running program consists of at least one thread. When the main function is executed, we refer to it as the "main thread". Threads are uniquely identified by their thread ID, which can be particularly useful for debugging a program. The code on the right prints the thread identifier of the main thread and outputs it to the console:
When run, the program prints the id of the main thread to the console. You can compile this code from the terminal in the lower right using g++ and then run the resulting executable.
Note: The actual thread id and process exit message will vary from machine to machine.
Also, it is possible to retrieve the number of available CPU cores of a system. The example on the right prints the number of CPU cores to the console.
These are the results from a local machine at the time of writing:
Try running this code to see what results you get!
In this section, we will start a second thread in addition to the main thread of our program. To do this, we need to construct a thread object and pass it the function we want to be executed by the thread. Once the thread enters the runnable state, the execution of the associated thread function may start at any point in time.
After the thread object has been constructed, the main thread will continue and execute the remaining instructions until it reaches the end and returns. It is possible that by this point in time, the thread will also have finished. But if this is not the case, the main program will terminate and the resources of the associated process will be freed by the OS. As the thread exists within the process, it can no longer access those resources and thus cannot finish its execution as intended.
To prevent this from happening and have the main program wait for the thread to finish executing the thread function, we need to call join() on the thread object. This call returns only when the thread reaches the end of the thread function, blocking the main thread until then.
The code on the right shows how to use join() to ensure that main() waits for the thread t to finish its operations before returning. It uses the function sleep_for(), which pauses the execution of the respective thread for a specified amount of time. The idea is to simulate some work to be done in the respective threads of execution.
To compile this code with g++, you will need to use the -pthread flag. The flag adds support for multithreading with the pthreads library, and it sets options for both the preprocessor and the linker.
Note: If you compile without the -pthread flag, you will see an error of the form: undefined reference to pthread_create. You will need to use the -pthread flag for all other multithreaded examples in this course going forward.
The code produces the following output:
Not surprisingly, the main function finishes before the thread because the delay inserted into the thread function is much larger than the one in the main path of execution. The call to join() at the end of the main function ensures that it will not return prematurely. As an experiment, comment out t.join() and execute the program. What do you expect will happen?
One very important trait of concurrent programs is their non-deterministic behavior: it cannot be predicted which thread the scheduler will execute at which point in time. In the code on the right, the amount of work to be performed both in the thread function and in main has been split into two separate jobs.
The console output shows that the work packages in both threads have been interleaved with the first package being performed before the second package.
Interestingly, when executed on my local machine, the order of execution has changed. Now, instead of finishing the second work package in the thread first, main gets there first.
Executing the code several more times shows that the two versions of program output interchange in a seemingly random manner. This element of randomness is an important characteristic of concurrent programs, and we have to take measures to deal with it in a controlled way that prevents unwanted behavior or even program crashes.
Reminder: You will need to use the -pthread flag when compiling this code, just as you did with the previous example. This flag will be needed for all future multithreaded programs in this course as well.
In the previous example, the order of execution is determined by the scheduler. If we wanted to ensure that the thread function completed its work before the main function started its own work (because it might be waiting for a result to be available), we could achieve this by repositioning the call to join.
In the file on the right, the join() call has been moved to before the work in main(). The order of execution now always looks like the following:
In later sections of this course, we will make extended use of the join() function to carefully control the flow of execution in our programs and to ensure that results of thread functions are available and complete where we need them to be.
Let us now take a look at what happens if we don't join a thread before its destructor is called. When we comment out join in the example above and then run the program again, it aborts with an error. The reason is that the designers of the C++ standard wanted to make debugging multi-threaded programs easier: having the program crash forces the programmer to remember to properly join the threads that are created. Such a hard error is usually much easier to detect than soft errors that do not show themselves so obviously.
There are some situations, however, where it might make sense to not wait for a thread to finish its work. This can be achieved by "detaching" the thread, by which the internal state variable "joinable" is set to "false". This works by calling the detach() method on the thread. The destructor of a detached thread does nothing: it neither blocks nor terminates the thread. In the following example, detach is called on the thread object, which causes the main thread to continue immediately until it reaches the end of the program code and returns. Note that a detached thread can never be joined again.
You can run the code above using example_6.cpp over on the right side of the screen.
Programmers should be very careful, though, when using the detach() method. You have to make sure that the thread does not access any data that might go out of scope or be deleted. Also, we do not want our program to terminate with threads still running. Should this happen, such threads will be terminated very harshly, without the chance to properly clean up their resources, which would usually happen in the destructor. So a well-designed program usually has a well-designed mechanism for joining all threads before exiting.
In the code on the right, you will find a thread function called threadFunctionEven, which is passed to a thread t. In this example, the thread is immediately detached after creation. To ensure main does not quit before the thread is finished with its work, there is a sleep_for call at the end of main.
Please create a new function called threadFunctionOdd that outputs the string "Odd thread\n". Then write a for-loop that starts 6 threads and immediately detaches them. Based on whether the increment variable is even or odd, you should pass the respective function to the thread.
Run the program several times and look at the console output. What do you observe? As a second experiment, comment out the sleep_for call in the main thread. What happens to the detached threads in this case?
In the previous section, we saw that one way to pass arguments to the thread function is to package them in a class using the function call operator. Even though this worked well, it would be very cumbersome to write a special class every time we need to pass data to a thread. We can also use a lambda that captures the arguments and then calls the function. But there is a simpler way: the thread constructor may be called with a function and all of its arguments. This is possible because the thread constructor is a variadic template that takes multiple arguments.
Before C++11, classes and functions could only accept a fixed number of arguments, which had to be specified during the first declaration. With variadic templates it is possible to include any number of arguments of any type.
As seen in the code example above, a first thread object is constructed by passing it the function printID and an integer argument. Then, a second thread object is constructed with a function printIDAndName, which requires an integer and a string parameter. If only a single argument were provided to the thread when calling printIDAndName, a compiler error would occur (see std::thread t3 in the example) - which is the same type checking we would get when calling the function directly.
There is one more difference between calling a function directly and passing it to a thread: with the former, arguments may be passed by value, by reference or by using move semantics - depending on the signature of the function. When calling a function using a variadic template, the arguments are by default either moved or copied - depending on whether they are rvalues or lvalues. There are ways, however, to override this behavior. If we want to move an lvalue, for example, we can call std::move. In the following example, two threads are started, each with a different string as a parameter. With t1, the string name1 is copied by value, which allows us to print name1 even after join has been called. The second string, name2, is passed to the thread function using move semantics, which means that it is no longer available after join has been called on t2.
The console output shows how using copy-by-value and std::move affect the string parameters:
In the following example, the signature of the thread function is modified to take a non-const reference to the string instead.
When passing the string variable name to the thread function, we need to explicitly mark it as a reference so the compiler will treat it as such. This can be done by using the std::ref function. In the console output, it becomes clear that the string has been successfully modified within the thread function before being passed to main.
Even though the code works, we are now sharing mutable data between threads - which will be something we discuss in later sections of this course as a primary source for concurrency bugs.
In the previous sections, you have seen how to start threads with functions and function objects, with and without additional arguments. Also, you now know how to pass arguments to a thread function by reference. But what if we wish to run a member function other than the function call operator, such as a member function of an existing object? Luckily, the C++ library can handle this use case: for calling member functions, the std::thread constructor requires an additional argument for the object on which to invoke the member function.
In the example above, the Vehicle object v1 is passed to the thread function by value, thus a copy is made which does not affect the "original" living in the main thread. Changes to its member variable _id will thus not show when calling printID() later in main. The second Vehicle object v2 is instead passed by reference. Therefore, changes to its _id variable will also be visible in the main thread - hence the following console output:
In the previous example, we have to ensure that the existence of v2 outlives the completion of the thread t2 - otherwise there will be an attempt to access an invalidated memory address. An alternative is to use a heap-allocated object and a reference-counted pointer such as std::shared_ptr<Vehicle> to ensure that the object lives as long as it takes the thread to finish its work. The following example shows how this can be implemented:
In this section, we want to explore the influence of the number of threads on the performance of a program with respect to its overall runtime. The example below has a thread function called "workerThread" which contains a loop with an adjustable number of cycles in which a mathematical operation is performed.
In main(), a for-loop starts a configurable number of tasks that can either be executed synchronously or asynchronously. As an experiment, we will now use a number of different parameter settings to execute the program and evaluate the time it takes to finish the computations. The idea is to gauge the effect of the number of threads on the overall runtime:
int nLoops = 1e7, nThreads = 4, std::launch::async
With this set of parameters, the high workload is computed in parallel, with an overall runtime of ~45 milliseconds.
int nLoops = 1e7, nThreads = 5, std::launch::deferred
The difference to the first set of parameters is the synchronous execution of the tasks - all computations are performed sequentially - with an overall runtime of ~126 milliseconds. The relative runtime advantage of setting 1 over this setting is thus a factor of ~2.8 on a 4-core machine, which is impressive with regard to the achieved speed-up.
int nLoops = 10, nThreads = 5, std::launch::async
In this parameter setting, the tasks are run in parallel again but with a significantly lower number of computations: the thread function now computes only 10 square roots, whereas with settings 1 and 2 a total of 10,000,000 square roots were computed. The overall runtime of this example is therefore significantly lower, at only ~3 milliseconds.
int nLoops = 10, nThreads = 5, std::launch::deferred
In this last example, the same 10 square roots are computed sequentially. Surprisingly, the overall runtime is at only 0.01 milliseconds - an astounding difference to the asynchronous execution and a stark reminder that starting and managing threads takes a significant amount of time. It is therefore not a general advantage if computations are performed in parallel: it must be carefully weighed with regard to the computational effort whether parallelization makes sense.