System Design: Real-time Top-K from a Large/Streaming Dataset
Context
You receive a continuous, high-volume stream of events, each referencing an item (e.g., item_id). The system must continuously identify the Top K most frequent items and serve low-latency queries. Assume:
- Data volume is large (potentially millions of events per second), item cardinality can be high, and K is small (e.g., 10–1,000).
- Queries may request Top K for different time windows (e.g., last 1 minute, 1 hour, 1 day) and potentially by a grouping key (e.g., per region, per tenant).
- Results should be near-real-time with bounded staleness.
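As background for the approximate side of this problem, counter-based summaries such as Space-Saving (Metwally et al.) track Top-K frequencies in bounded memory by evicting the least-counted item when capacity is reached. A minimal sketch (the class name, capacity, and tie-breaking details are illustrative, not a prescribed implementation):

```python
import heapq

class SpaceSaving:
    """Approximate heavy-hitter counter in O(capacity) memory.

    When full, the least-counted item is evicted and the newcomer
    inherits its count, so estimates over-count by at most the
    evicted minimum (tracked per item as `error`).
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = {}  # item -> (estimated_count, error)

    def add(self, item):
        if item in self.counts:
            count, err = self.counts[item]
            self.counts[item] = (count + 1, err)
        elif len(self.counts) < self.capacity:
            self.counts[item] = (1, 0)
        else:
            # Evict the minimum-count item; the new item inherits its count.
            victim = min(self.counts, key=lambda k: self.counts[k][0])
            min_count, _ = self.counts.pop(victim)
            self.counts[item] = (min_count + 1, min_count)

    def top_k(self, k):
        return heapq.nlargest(k, self.counts.items(),
                              key=lambda kv: kv[1][0])
```

The `error` field lets a query distinguish guaranteed heavy hitters (count minus error still above the threshold) from uncertain ones, which matters when accuracy guarantees are part of the design discussion.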
Requirements
Design a system that:
- Ingests a large/streaming dataset and continuously identifies the Top K elements.
- Chooses suitable data structures for exact and approximate solutions.
- Scales horizontally across shards/partitions.
- Handles updates: insertions, deletions/expirations (e.g., sliding windows), out-of-order/late events.
- Supports high-throughput, low-latency queries (read path), including caching/materialization.
- Discusses consistency, fault tolerance, and operational considerations.
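One common way to satisfy the sliding-window and late-event requirements above is to count into fixed-size time buckets and merge the most recent buckets at query time; expiry then becomes dropping whole buckets. A rough sketch, assuming 1-minute buckets and event-time timestamps (class and method names are illustrative):

```python
from collections import Counter

class BucketedWindowCounter:
    """Per-item counts in fixed time buckets keyed by bucket start.

    Keying by event time (not arrival time) means late/out-of-order
    events simply land in their older bucket; a sliding-window query
    merges the buckets that overlap the window.
    """
    def __init__(self, bucket_seconds=60):
        self.bucket_seconds = bucket_seconds
        self.buckets = {}  # bucket_start -> Counter

    def add(self, item, ts):
        start = int(ts) - int(ts) % self.bucket_seconds
        self.buckets.setdefault(start, Counter())[item] += 1

    def expire(self, now, retention_seconds):
        # Deletion/expiration is O(buckets), not O(items).
        cutoff = now - retention_seconds
        for start in [s for s in self.buckets if s < cutoff]:
            del self.buckets[start]

    def top_k(self, k, now, window_seconds):
        merged = Counter()
        for start, counts in self.buckets.items():
            if start > now - window_seconds:
                merged.update(counts)
        return merged.most_common(k)
```

Merging at query time trades read latency for cheap writes; a production design would typically pre-materialize the popular windows (e.g., last 1m/1h/1d) instead of merging on every query, and bound per-bucket memory with a summary like the one sketched earlier.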
Provide the design, its trade-offs, and the key algorithms/data structures, including time/space complexity and accuracy bounds where approximation is used.
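For the horizontal-scaling requirement, one standard answer is to hash-partition the stream by item_id so each shard holds the complete count for its items, then merge per-shard candidate lists at query time. A minimal sketch of the merge step (the function name and input shape are assumptions for illustration):

```python
import heapq
from collections import Counter

def merge_shard_topk(shard_lists, k):
    """Merge per-shard (item, count) candidate lists into a global Top-K.

    Correct when items are hash-partitioned by item_id, so each item's
    full count lives on exactly one shard and no double counting occurs.
    Each shard should return more than K candidates (e.g., 2-4x K) to
    absorb approximation error in its local summary.
    """
    merged = Counter()
    for candidates in shard_lists:
        for item, count in candidates:
            merged[item] += count
    return heapq.nlargest(k, merged.items(), key=lambda kv: kv[1])
```

Note the contrast with random (round-robin) partitioning, where an item's count is spread across shards: the merge is still a sum, but every shard must then report candidates for every potentially heavy item, which inflates the candidate lists and weakens per-shard error bounds.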