Design scalable LRU cache and backtracking API
Company: Adyen
Role: Software Engineer
Category: System Design
Difficulty: hard
Interview Round: Technical Screen
Build two production-ready services. A) A thread-safe LRU caching service: specify its API, in-process concurrency control (e.g., fine-grained locks or lock-free techniques), eviction correctness under concurrent access, TTL and size limits, metrics, and observability; then discuss scaling it across multiple instances using Redis or sharding with consistent hashing, hot-key handling, replication and failover, backpressure, and data consistency. B) A compute-heavy find-transfer-combinations API that may receive request spikes: cover stateless vs. stateful design, avoiding shared mutable state in workers, per-request memory management, caching and precomputation strategies, request de-duplication, rate limiting, timeouts, idempotency, and horizontal scaling. Provide capacity estimates and a testing plan.
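For part A, an interviewer will usually expect at least an in-process baseline before the distributed discussion. A minimal sketch of a thread-safe LRU cache with a size limit, lazy TTL expiry, and hit/miss counters (all names and defaults here are illustrative, not a prescribed solution; a single coarse lock is the simplest correct choice, and lock striping or sharding is the natural follow-up for contention):

```python
import threading
import time
from collections import OrderedDict

class LRUCache:
    """Thread-safe LRU cache sketch: bounded size, lazy per-entry TTL, basic metrics."""

    def __init__(self, max_size=1024, ttl_seconds=60.0):
        self._data = OrderedDict()     # key -> (value, expiry timestamp); order = recency
        self._max_size = max_size
        self._ttl = ttl_seconds
        self._lock = threading.Lock()  # coarse lock; shard/stripe it to cut contention
        self.hits = 0                  # counters you would export as metrics
        self.misses = 0

    def get(self, key):
        with self._lock:
            entry = self._data.get(key)
            if entry is None:
                self.misses += 1
                return None
            value, expires = entry
            if time.monotonic() > expires:      # lazy TTL eviction on read
                del self._data[key]
                self.misses += 1
                return None
            self._data.move_to_end(key)         # mark as most recently used
            self.hits += 1
            return value

    def put(self, key, value):
        with self._lock:
            self._data[key] = (value, time.monotonic() + self._ttl)
            self._data.move_to_end(key)
            while len(self._data) > self._max_size:
                self._data.popitem(last=False)  # evict least recently used
```

Holding one lock for both lookup and recency update is what makes eviction correct under concurrent access; a lock-free variant would need an atomic or epoch-based recency structure instead.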
Quick Answer: This question evaluates your grasp of scalable system design across several fronts: concurrent in-process caching and eviction semantics, distributed cache topologies, API design for compute-heavy services, observability and metrics, consistency models, rate limiting, autoscaling, and validation/testing strategies.