Design and implement, in Python, a streaming algorithm that ingests an unbounded event stream of tuples (user_id, event_time, event_type) and maintains, for each user, at most M events forming a uniform random sample without replacement of all events seen so far for that user.

Requirements:

(1) Use reservoir sampling so that every event has equal inclusion probability; prove correctness.
(2) Achieve amortized O(1) update cost per event and O(U * M) memory, where U is the number of active users.
(3) Support concurrent shards with deterministic merging given a random seed.
(4) Provide unit tests that verify marginal inclusion probabilities and that increasing M reduces the variance of feature estimates derived from the reservoir.
(5) As an extension, maintain a real-time top-K list of users by event count using a size-K min-heap that supports increments and decrements when late events are revoked; state the time and space complexities.
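A minimal sketch of the per-user reservoir core (requirements 1 and 2) might look like the following. The class and method names (`UserReservoirs`, `add`) are illustrative, not part of the prompt; the RNG is seeded so runs are reproducible. Correctness rests on the standard reservoir-sampling invariant: after n events, each event survives in the reservoir with probability m/n.

```python
import random
from collections import defaultdict

class UserReservoirs:
    """Per-user reservoir sampling: keeps at most m events per user,
    each set being a uniform sample without replacement of that user's
    stream so far. Sketch only; sharding and revocation are out of scope.
    Update is O(1) per event; memory is O(U * m) for U active users."""

    def __init__(self, m: int, seed: int = 0):
        self.m = m
        self.rng = random.Random(seed)    # seeded for reproducibility
        self.samples = defaultdict(list)  # user_id -> reservoir of events
        self.counts = defaultdict(int)    # user_id -> events seen so far

    def add(self, user_id, event):
        self.counts[user_id] += 1
        n = self.counts[user_id]
        res = self.samples[user_id]
        if n <= self.m:
            res.append(event)             # fill phase: keep everything
        else:
            j = self.rng.randrange(n)     # uniform index in [0, n)
            if j < self.m:
                res[j] = event            # replace with probability m/n
```

By induction: a new event enters with probability m/n, and any resident event is evicted with probability (m/n) * (1/m) = 1/n, so every event's inclusion probability stays exactly m/n.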
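For requirement 3, one standard way to merge two shard-local reservoirs deterministically is to fill each output slot by first choosing a source shard with probability proportional to its remaining (unsampled) population size, then drawing without replacement from that shard's reservoir. The function name `merge_reservoirs` and its signature are assumptions for this sketch; given the same seed, the merge is reproducible.

```python
import random

def merge_reservoirs(res1, n1, res2, n2, m, seed=0):
    """Merge two shard reservoirs into one size-m reservoir.
    res_i is a uniform sample (of size min(m, n_i)) drawn from the
    n_i events seen by shard i. Each output slot selects shard i with
    probability proportional to its remaining population, simulating
    sampling without replacement from the union of both streams."""
    rng = random.Random(seed)             # same seed -> same merge result
    a, b = list(res1), list(res2)         # copies; inputs stay intact
    na, nb = n1, n2
    out = []
    for _ in range(min(m, n1 + n2)):
        if rng.random() < na / (na + nb):
            out.append(a.pop(rng.randrange(len(a))))
            na -= 1
        else:
            out.append(b.pop(rng.randrange(len(b))))
            nb -= 1
    return out
```

Because each shard draw is itself uniform without replacement, the composed draw is uniform over the union, and the fixed seed makes concurrent shards mergeable in a deterministic order.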
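For the top-K extension (requirement 5), decrements from revoked late events make a strict size-K min-heap awkward, so a common workaround is a lazily-cleaned heap: every count change pushes a fresh entry, and stale entries are skipped at query time. The names below (`TopKUsers`, `update`, `top_k`) are hypothetical. Updates are O(log n) amortized; space is O(n) for the counts plus the (lazily pruned) heap, where n is the number of tracked users.

```python
import heapq
from collections import defaultdict

class TopKUsers:
    """Top-K users by event count with increment/decrement support.
    A lazily-cleaned max-view heap (negated counts) rather than a strict
    size-K heap: an entry is valid only if it matches the live count."""

    def __init__(self, k: int):
        self.k = k
        self.counts = defaultdict(int)
        self.heap = []                    # (-count, user) entries, possibly stale

    def update(self, user_id, delta: int = 1):
        self.counts[user_id] += delta     # delta=-1 revokes a late event
        heapq.heappush(self.heap, (-self.counts[user_id], user_id))

    def top_k(self):
        """Return up to k (user, count) pairs, discarding stale entries."""
        seen, out = set(), []
        while self.heap and len(out) < self.k:
            neg, user = heapq.heappop(self.heap)
            if user not in seen and -neg == self.counts[user]:
                seen.add(user)
                out.append((user, -neg))
        for user, c in out:               # restore the valid entries
            heapq.heappush(self.heap, (-c, user))
        return out
```

The trade-off is extra heap space for stale entries in exchange for O(log n) updates; a periodic rebuild can bound the heap at O(n) entries.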