C++ Reactive Programming

What is concurrency?

At a basic level, concurrency stands for more than one activity happening at the same time. We can correlate concurrency to many of our real-life situations, such as eating popcorn while we watch a movie or using two hands for separate functions at the same time, and so on. Well then, what is concurrency in a computer?

Computer systems have been capable of task switching for decades, and multitasking operating systems have existed for a long time. Why, then, is there renewed interest in concurrency in the computing realm? Microprocessor manufacturers used to increase computing power by cramming more and more transistors into a processor, but at a certain stage they hit fundamental physical limits and could not pack more into the same area. The CPUs of that era had a single path of execution at a time and ran multiple instruction streams by switching between tasks. At the CPU level, only one instruction stream was executing, but because the switches happen very fast (compared to human perception), users felt that actions were happening at the same time.

Around the year 2005, Intel announced their new multicore processors (which support multiple paths of execution at the hardware level), which was a game changer. Instead of a single processor doing every task by switching between them, multicore processors made it possible to actually perform tasks in parallel. But this introduced another challenge for programmers: writing their code to leverage hardware-level concurrency. There was also the issue that actual hardware concurrency behaves differently from the illusion created by task switching. Until multicore processors came to light, chip manufacturers had been in a race to increase clock speeds, expecting to reach 10 GHz before the end of the first decade of the 21st century. As Herb Sutter said in The Free Lunch is Over (http://www.gotw.ca/publications/concurrency-ddj.htm), "If software is to take advantage of this increased computing power, it must be designed to run multiple tasks concurrently". Herb warned programmers that they could no longer ignore concurrency and would have to take it into account while writing programs.

The modern C++ standard library provides a set of mechanisms to support concurrency and parallelism. First and foremost, std::thread, along with the synchronization primitives (such as std::mutex, std::lock_guard, std::unique_lock, std::condition_variable, and so on), empowers programmers to write concurrent multithreaded code in standard C++. Secondly, to enable task-based parallelism (as in .NET and Java), C++ introduced std::future and std::promise, which work in pairs to separate the function invocation from the retrieval of its result.

Finally, to avoid the additional overhead of managing threads by hand, C++ introduced the function template std::async, which will be covered in detail in the following chapter, where the focus of the discussion will be writing lock-free concurrent programs (or, at least, minimizing locks wherever possible).

Concurrency is when two or more threads or execution paths can start, run, and complete in overlapping time periods (in some kind of interleaved execution). Parallelism means two tasks actually run at the same instant (as on a multicore CPU). Concurrency is primarily about responsiveness, while parallelism is mostly about exploiting the available hardware resources.