Plast interview

29. Two-Lock Deadlock

In a low-latency matching engine, multiple threads must update shared state behind several locks. A single stall can blow up tail latency and cost the engine market opportunities. Design the synchronization to be correct and deadlock-free under extreme contention.

#include <mutex>
std::mutex m1, m2;
// f takes m1 then m2; g takes m2 then m1 -- opposite acquisition orders.
void f(){ std::lock_guard<std::mutex> l1(m1); std::lock_guard<std::mutex> l2(m2); } // m1 -> m2
void g(){ std::lock_guard<std::mutex> l1(m2); std::lock_guard<std::mutex> l2(m1); } // m2 -> m1

Part 1.

If f() and g() run concurrently on separate threads, what risk exists? Propose a deadlock-free change that preserves strong exception safety with minimal overhead.

Part 2.

(1) What invariant prevents deadlock when acquiring multiple mutexes?

(2) Compare std::scoped_lock(m1, m2) with std::lock plus adopt_lock.

(3) When is try_lock appropriate, and how does it affect fairness?

(4) What memory-order guarantees do mutexes provide versus atomic operations?

(5) How to detect and mitigate lock convoying or priority inversion in production?

Answer (Part 1)

Deadlock can occur because f and g acquire m1 and m2 in opposite orders, creating a cycle in the waits-for graph. Acquire both mutexes as a single deadlock-avoiding step: use std::scoped_lock l(m1, m2) or std::lock(m1, m2) followed by adopt_lock guards in both functions; RAII preserves exception safety with negligible overhead.
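
A minimal sketch of the fix, assuming C++17 (the comments stand in for the real critical sections):

#include <mutex>
std::mutex m1, m2;

// std::scoped_lock acquires both mutexes as one deadlock-avoiding step,
// so the textual order of m1 and m2 no longer matters.
void f(){ std::scoped_lock l(m1, m2); /* ... update shared state ... */ }
void g(){ std::scoped_lock l(m1, m2); /* ... update shared state ... */ }

The guard releases both mutexes in its destructor, so an exception thrown inside the critical section still unlocks everything.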

Answer (Part 2)

(1) A global, consistent lock ordering (lock ranks) prevents cycles; an acyclic waits-for graph eliminates deadlock.
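
One common way to enforce such an ordering, sketched here using addresses as a stand-in for lock ranks (lock_ordered is a hypothetical helper, not a standard facility):

#include <functional>
#include <mutex>

// Acquire two mutexes in a global order derived from their addresses;
// std::less gives a total order on pointers, so every thread agrees.
void lock_ordered(std::mutex& a, std::mutex& b){
    if (&a == &b) { a.lock(); return; }  // same mutex: lock once
    if (std::less<std::mutex*>{}(&a, &b)) { a.lock(); b.lock(); }
    else                                  { b.lock(); a.lock(); }
}

Real systems often assign explicit ranks instead of relying on addresses, so the ordering can be asserted and audited.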

(2) std::scoped_lock atomically locks multiple mutexes and is concise and exception-safe. std::lock + adopt_lock achieves the same effect and works before C++17, but the two-step form is more verbose and easier to get wrong.
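
For comparison, a sketch of the pre-C++17 equivalent of the scoped_lock fix above:

#include <mutex>
std::mutex m1, m2;

void f(){
    std::lock(m1, m2);                                   // deadlock-avoiding acquisition
    std::lock_guard<std::mutex> l1(m1, std::adopt_lock); // adopt the already-held locks
    std::lock_guard<std::mutex> l2(m2, std::adopt_lock); // so they unlock on scope exit
    /* ... critical section ... */
}

The two-step form makes it possible to forget a guard and leak a locked mutex, which is exactly what std::scoped_lock packages away.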

(3) Use try_lock when work can be skipped or retried without blocking. It can reduce contention but risks starvation without backoff/fairness.
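
A sketch of the skip-or-retry pattern with exponential backoff (the attempt count and sleep quantum are illustrative assumptions, not tuned values):

#include <chrono>
#include <mutex>
#include <thread>

std::mutex m;

bool try_update(int attempts){
    for (int i = 0; i < attempts; ++i){
        if (m.try_lock()){
            std::lock_guard<std::mutex> l(m, std::adopt_lock); // unlock on scope exit
            /* ... do the update ... */
            return true;
        }
        std::this_thread::sleep_for(std::chrono::microseconds(1 << i)); // back off before retrying
    }
    return false; // caller skips the work or escalates to a blocking lock
}

The backoff spreads retries out in time, trading a little latency for less starvation under contention.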

(4) A mutex unlock synchronizes-with the next lock of the same mutex, so writes made inside a critical section happen-before the next critical section; in effect, lock is an acquire and unlock is a release. Atomics allow finer orderings (e.g., memory_order_release/acquire, or relaxed) with potentially lower cost, at the price of subtler reasoning.
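
A minimal release/acquire publication sketch, showing the same happens-before edge a mutex pair would give:

#include <atomic>

std::atomic<bool> ready{false};
int payload = 0; // plain, non-atomic data published via the flag

void producer(){
    payload = 42;                                 // write the data first
    ready.store(true, std::memory_order_release); // release: publishes the write above
}
void consumer(){
    while (!ready.load(std::memory_order_acquire)) {} // acquire: pairs with the release
    // payload is guaranteed to read 42 here, just as it would be after
    // locking a mutex that the producer had unlocked.
}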

(5) Observe lock hold times, contention counters, and scheduler traces; sample stacks under load. Mitigate via sharding (sketched below), shortening critical sections, consistent lock ordering, priority inheritance, or lock-free designs.
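
As one mitigation, a sketch of lock sharding (the shard count and hashing by symbol id are assumptions for illustration):

#include <array>
#include <cstddef>
#include <functional>
#include <mutex>

struct ShardedState {
    static constexpr std::size_t kShards = 16; // power of two, chosen arbitrarily here
    std::array<std::mutex, kShards> locks;
    std::mutex& lock_for(std::size_t key){ return locks[std::hash<std::size_t>{}(key) % kShards]; }
};

void update(ShardedState& s, std::size_t symbol_id){
    std::lock_guard<std::mutex> l(s.lock_for(symbol_id)); // contend only within one shard
    /* ... mutate state owned by this shard ... */
}

Unrelated symbols now contend on different mutexes, which shortens the queue behind each lock and shrinks convoys.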