This question evaluates understanding of concurrency control and lock-granularity trade-offs in concurrent storage components: coarse-grained versus fine-grained locking, lock-free and wait-free alternatives, and their impact on throughput and latency (particularly p99) under contention, along with pitfalls such as deadlock, priority inversion, lock convoying, and starvation. Commonly asked in software engineering fundamentals and systems interviews, it assesses architectural and performance reasoning for storage engines, targeting both conceptual understanding of concurrency models and practical trade-offs in implementation complexity.
You are implementing a component inside a storage engine that is accessed concurrently (e.g., an in-memory index, metadata cache, or block map).
Propose a concurrency-control approach and discuss its trade-offs.
Assume many threads, a read-heavy workload, and occasional writes (updates).