This lab covers lock-free concurrency in C++. We will implement two synchronization methods, mutex-based locking and CAS-based busy waiting, and then compare their performance under different workloads and thread counts to see when and why one outperforms the other.
Please make sure that all of your answers to questions in these labs come from work done on the Edlab environment; otherwise, you may get inconsistent results and will not receive points.
The TA present in your lab will give a brief explanation of the various parts of this lab, but you are expected to answer all questions by yourself. Please raise your hand if you have any questions during the lab section. Questions and Parts have a number of points marked next to them to signify their weight in this lab’s final grade.
Please read through this lab and follow the instructions. After you do that, visit Gradescope and complete the questions associated with this lab by the assigned due date.
Once you have logged in to Edlab, you can clone this repo using
git clone https://github.com/umass-cs-377/lock-free-lab.git
Then you can use cd to open the directory you just cloned:
cd lock-free-lab
This repo includes a Makefile that allows you to locally compile and run all the sample code listed in this tutorial. You can compile it by running make. Feel free to modify the source files yourself. After making changes, you can run make again to build new binaries from your modified files. You can also use make clean to remove all the built files; this command is usually used when something goes wrong during compilation, so that you can start fresh.
A lock (mutex) is a synchronization mechanism that ensures that only one thread can access a shared resource at a time. When a thread acquires a lock, any other thread attempting to access the same resource must wait until the lock is released. This guarantees mutual exclusion and prevents data races. Locks are typically implemented by the operating system using kernel support. When contention occurs, the OS can put waiting threads to sleep and wake them later, which prevents unnecessary CPU use.
Locks are conceptually simple and are appropriate when the protected work inside the critical section is relatively large or involves multiple shared variables. However, excessive locking can cause blocking and context-switching overhead, particularly when many threads compete for the same resource. Misuse of locks can also lead to deadlocks if threads acquire locks in conflicting orders.
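As a quick illustration (a sketch, not part of the lab's code), the snippet below uses a std::mutex with std::lock_guard to keep two shared variables consistent: a thread's pair of updates is always applied as a unit, so no other thread ever observes one update without the other.

```cpp
#include <mutex>
#include <thread>
#include <iostream>

// Two shared variables that must stay in sync.
long deposits = 0;
long balance  = 0;
std::mutex account_mutex;

void deposit(int times) {
    for (int i = 0; i < times; i++) {
        // lock_guard acquires the mutex here and releases it when it
        // goes out of scope, so both updates happen together.
        std::lock_guard<std::mutex> guard(account_mutex);
        deposits++;
        balance += 10;
    }
}

int main() {
    std::thread t1(deposit, 100000), t2(deposit, 100000);
    t1.join();
    t2.join();
    std::cout << deposits << " deposits, balance " << balance << std::endl;
}
```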
In contrast, lock-free concurrency avoids using traditional locks. Instead, it relies on hardware-supported atomic operations that guarantee an update happens as an indivisible step. If two threads attempt to update a variable at the same time, only one will succeed, and the other can retry until it succeeds. A common pattern is the compare-and-swap (CAS) loop, which repeatedly attempts to change a value only if it has not been modified since it was last read.
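For example, a CAS-based increment using std::atomic might look like the following sketch (illustrative only, not the lab's code). compare_exchange_weak writes the new value only if the variable still holds the value we last read; otherwise it reloads the current value and the loop retries.

```cpp
#include <atomic>
#include <thread>
#include <vector>
#include <iostream>

std::atomic<int> counter{0};

void worker(int iterations) {
    for (int i = 0; i < iterations; i++) {
        int expected = counter.load();
        // Retry until no other thread modified counter between our read
        // and our write; on failure, expected is refreshed automatically.
        while (!counter.compare_exchange_weak(expected, expected + 1)) {
            // busy-wait: nothing to do here, just attempt the CAS again
        }
    }
}

int main() {
    std::vector<std::thread> threads;
    for (int t = 0; t < 4; t++)
        threads.emplace_back(worker, 100000);
    for (auto &th : threads)
        th.join();
    std::cout << counter << std::endl;  // always 400000
}
```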
Today, we want you to explore both approaches in practice and observe which one performs better under different conditions. By experimenting with various scenarios, you will build intuition for how locks and lock-free mechanisms behave and when each is preferable.
A mutex (short for mutual exclusion) is a synchronization primitive used to protect shared data from concurrent access. It ensures that only one thread can execute a particular section of code at a time. When a thread calls pthread_mutex_lock(), it attempts to acquire ownership of the mutex.
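A minimal sketch of the pthread lock/unlock pattern is shown below (the variable names are illustrative, not those used in the lab's source files). The thread that successfully acquires the mutex enters the critical section; any other thread calling pthread_mutex_lock() on the same mutex blocks until it is unlocked.

```cpp
#include <pthread.h>
#include <iostream>

int shared_value = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    // block until we own the mutex
        shared_value++;               // critical section
        pthread_mutex_unlock(&lock);  // give up ownership
    }
    return nullptr;
}

int main() {
    pthread_t t1, t2;
    pthread_create(&t1, nullptr, worker, nullptr);
    pthread_create(&t2, nullptr, worker, nullptr);
    pthread_join(t1, nullptr);
    pthread_join(t2, nullptr);
    std::cout << shared_value << std::endl;  // always 200000
}
```

Compile with the -pthread flag (the provided Makefile already handles this for the lab's own binaries).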