Threads and Synchronization: Mutexes, Condition Variables, and Semaphores
Threads introduce the hardest class of bugs: issues that depend on timing.
To write correct concurrent programs, you need a solid understanding of what synchronization primitives guarantee (and what they don’t).
This post covers:
- Mutexes
- Condition variables
- Semaphores
- Common pitfalls like missed wakeups and deadlocks
1) What a mutex actually guarantees
A mutex gives you two important properties:
- Mutual exclusion: only one thread executes a critical section at a time.
- Synchronization / visibility: unlocking a mutex publishes writes in the critical section to a thread that subsequently locks the same mutex.
That visibility is often overlooked: mutexes are not only about “taking turns”; they’re also about memory ordering.
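A minimal sketch of both guarantees (the counter and function names here are illustrative, not part of the examples later in the post): two threads increment a shared counter, and the lock/unlock pairs both serialize the increments and publish them.
#include <pthread.h>
#include <stdio.h>
static long counter = 0;
static pthread_mutex_t counter_mu = PTHREAD_MUTEX_INITIALIZER;
static void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_mu);
        counter++;                          /* mutual exclusion: no lost updates */
        pthread_mutex_unlock(&counter_mu);  /* publishes the write to later lockers */
    }
    return NULL;
}
int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* pthread_join already synchronizes here, so this read is safe without the
       lock; with concurrent readers you would lock around the read as well. */
    printf("counter = %ld\n", counter);     /* always 200000 */
    return 0;
}
Without the mutex, the two increments can interleave and updates get lost (and the program has a data race).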
2) Condition variables: not events
A condition variable is not a message queue and not an “event you set”.
Correct usage pattern:
- Protect shared state with a mutex.
- Wait in a loop that checks the state.
- The wakeup means: “the state might have changed, re-check it”.
Why a loop?
- Spurious wakeups are permitted.
- Multiple waiters may compete.
- Your thread may wake, but the condition may no longer hold.
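In pthread terms the pattern is only a few lines. This sketch uses an illustrative ready flag rather than a real API:
#include <pthread.h>
#include <stdbool.h>
static bool ready = false;
static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
/* Waiter: loop on the predicate, not on the wakeup. */
static void wait_until_ready(void) {
    pthread_mutex_lock(&mu);
    while (!ready) {                 /* re-check the state after every wakeup */
        pthread_cond_wait(&cv, &mu); /* atomically unlocks, sleeps, re-locks */
    }
    /* the predicate holds and the mutex is held here */
    pthread_mutex_unlock(&mu);
}
/* Signaler: change the state under the mutex, then signal. */
static void set_ready(void) {
    pthread_mutex_lock(&mu);
    ready = true;
    pthread_mutex_unlock(&mu);
    pthread_cond_signal(&cv);
}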
3) C (pthread): producer/consumer with a bounded queue
This example uses:
- a fixed-size ring buffer
- a mutex
- two condition variables (not_empty, not_full)
#include <pthread.h>
#include <stddef.h>
#include <stdint.h>
#define QCAP 1024
typedef struct {
uint64_t buf[QCAP];
size_t head;
size_t tail;
size_t len;
pthread_mutex_t mu;
pthread_cond_t not_empty;
pthread_cond_t not_full;
} queue_t;
static void q_init(queue_t *q) {
q->head = q->tail = q->len = 0;
pthread_mutex_init(&q->mu, NULL);
pthread_cond_init(&q->not_empty, NULL);
pthread_cond_init(&q->not_full, NULL);
}
static void q_push(queue_t *q, uint64_t v) {
pthread_mutex_lock(&q->mu);
while (q->len == QCAP) {
pthread_cond_wait(&q->not_full, &q->mu);
}
q->buf[q->tail] = v;
q->tail = (q->tail + 1) % QCAP;
q->len++;
pthread_cond_signal(&q->not_empty);
pthread_mutex_unlock(&q->mu);
}
static uint64_t q_pop(queue_t *q) {
pthread_mutex_lock(&q->mu);
while (q->len == 0) {
pthread_cond_wait(&q->not_empty, &q->mu);
}
uint64_t v = q->buf[q->head];
q->head = (q->head + 1) % QCAP;
q->len--;
pthread_cond_signal(&q->not_full);
pthread_mutex_unlock(&q->mu);
return v;
}
Key details:
pthread_cond_wait atomically:
- releases the mutex
- sleeps
- re-acquires the mutex before returning
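As a usage sketch (the item count mirrors the Rust example below, and this assumes the queue_t code above is in the same file): one producer pushes a million values while one consumer pops and sums them.
#include <stdio.h>
#define N_ITEMS 1000000ULL
static queue_t g_q;
static void *producer(void *arg) {
    (void)arg;
    for (uint64_t i = 0; i < N_ITEMS; i++)
        q_push(&g_q, i);
    return NULL;
}
static void *consumer(void *arg) {
    uint64_t sum = 0;
    (void)arg;
    for (uint64_t i = 0; i < N_ITEMS; i++)
        sum += q_pop(&g_q);
    printf("sum: %llu\n", (unsigned long long)sum);
    return NULL;
}
int main(void) {
    pthread_t prod, cons;
    q_init(&g_q);
    pthread_create(&prod, NULL, producer, NULL);
    pthread_create(&cons, NULL, consumer, NULL);
    pthread_join(prod, NULL);
    pthread_join(cons, NULL);
    return 0;
}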
4) Zig: mutex + condition variable
Zig has synchronization primitives in std.Thread.
const std = @import("std");
const Queue = struct {
const QCAP: usize = 1024;
buf: [QCAP]u64 = undefined,
head: usize = 0,
tail: usize = 0,
len: usize = 0,
mu: std.Thread.Mutex = .{},
not_empty: std.Thread.Condition = .{},
not_full: std.Thread.Condition = .{},
fn push(self: *Queue, v: u64) void {
self.mu.lock();
defer self.mu.unlock();
while (self.len == QCAP) {
self.not_full.wait(&self.mu);
}
self.buf[self.tail] = v;
self.tail = (self.tail + 1) % QCAP;
self.len += 1;
self.not_empty.signal();
}
fn pop(self: *Queue) u64 {
self.mu.lock();
defer self.mu.unlock();
while (self.len == 0) {
self.not_empty.wait(&self.mu);
}
const v = self.buf[self.head];
self.head = (self.head + 1) % QCAP;
self.len -= 1;
self.not_full.signal();
return v;
}
};
5) Rust: Mutex + Condvar
Rust’s standard library makes the “wait in a loop” pattern explicit: Condvar::wait consumes the MutexGuard and returns it, so you naturally re-check the predicate in a loop (Condvar::wait_while wraps this up for you).
use std::collections::VecDeque;
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
const QCAP: usize = 1024;
fn main() {
let q = Arc::new((Mutex::new(VecDeque::<u64>::new()), Condvar::new(), Condvar::new()));
let producer_q = q.clone();
let prod = thread::spawn(move || {
for i in 0..1_000_000u64 {
let (lock, not_empty, not_full) = &*producer_q;
let mut guard = lock.lock().unwrap();
while guard.len() == QCAP {
guard = not_full.wait(guard).unwrap();
}
guard.push_back(i);
not_empty.notify_one();
}
});
let consumer_q = q.clone();
let cons = thread::spawn(move || {
let mut sum = 0u64;
for _ in 0..1_000_000u64 {
let (lock, not_empty, not_full) = &*consumer_q;
let mut guard = lock.lock().unwrap();
while guard.is_empty() {
guard = not_empty.wait(guard).unwrap();
}
let v = guard.pop_front().unwrap();
not_full.notify_one();
sum = sum.wrapping_add(v);
}
sum
});
prod.join().unwrap();
let sum = cons.join().unwrap();
println!("sum: {sum}");
}
6) Semaphores: counting permits
A semaphore tracks an integer count:
- post/release: increment the count and wake waiters
- wait/acquire: decrement the count, or sleep if it is 0
Semaphores are great when you want “N permits” semantics.
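A sketch with POSIX unnamed semaphores (the permit and worker counts are arbitrary; sem_init with pshared = 0 works on Linux, while macOS only supports named semaphores):
#include <pthread.h>
#include <semaphore.h>
#include <stdint.h>
#include <stdio.h>
#define PERMITS 4      /* at most 4 threads use the resource at once */
#define WORKERS 16
static sem_t permits;
static void *worker(void *arg) {
    long id = (long)(intptr_t)arg;
    sem_wait(&permits);              /* acquire: decrement, or sleep at 0 */
    printf("worker %ld holds a permit\n", id);
    /* ... use the limited resource ... */
    sem_post(&permits);              /* release: increment and wake a waiter */
    return NULL;
}
int main(void) {
    pthread_t t[WORKERS];
    sem_init(&permits, 0, PERMITS);  /* pshared = 0: shared between threads only */
    for (long i = 0; i < WORKERS; i++)
        pthread_create(&t[i], NULL, worker, (void *)(intptr_t)i);
    for (int i = 0; i < WORKERS; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&permits);
    return 0;
}
A binary semaphore looks like a mutex, but it has no owner: any thread may post it, which is exactly what makes it useful for signaling and permit counting, and risky as a plain lock.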
7) Deadlocks and how to prevent them
Common deadlock causes:
- Lock ordering cycles
- Holding a lock while calling unknown code (callbacks, user-supplied functions) that may itself take locks
- Taking multiple locks without a consistent ordering
Mitigations:
- Define a strict lock order (see the sketch after this list).
- Keep critical sections small.
- Avoid blocking I/O while holding a lock.
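To make “define a strict lock order” concrete, here is one common convention (a sketch, not the only option): when a function must hold two mutexes, always acquire them in a fixed global order, for example by address.
#include <pthread.h>
#include <stdint.h>
/* Acquire two mutexes in a fixed global order (here: by address) so every
   caller nests them the same way and no ordering cycle can form. */
static void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    if (a == b) {                       /* same mutex: lock it only once */
        pthread_mutex_lock(a);
        return;
    }
    if ((uintptr_t)a > (uintptr_t)b) {  /* normalize to a consistent order */
        pthread_mutex_t *tmp = a; a = b; b = tmp;
    }
    pthread_mutex_lock(a);
    pthread_mutex_lock(b);
}
static void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    pthread_mutex_unlock(a);            /* unlock order cannot cause deadlock */
    if (a != b)
        pthread_mutex_unlock(b);
}
Higher-level designs often sidestep the problem entirely by never holding more than one lock at a time.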
References
- POSIX threads overview: https://man7.org/linux/man-pages/man7/pthreads.7.html
- pthread_mutex_lock(3p): https://man7.org/linux/man-pages/man3/pthread_mutex_lock.3p.html
- pthread_cond_wait(3p): https://man7.org/linux/man-pages/man3/pthread_cond_wait.3p.html
- Rust Mutex and Condvar: https://doc.rust-lang.org/std/sync/
- Zig std.Thread primitives: https://ziglang.org/documentation/master/std/#std.Thread