This question evaluates system-design and analytics competencies for online experimentation platforms in the Analytics & Experimentation domain. It focuses on deterministic sticky assignment, configuration storage, eligibility targeting, traffic allocation, mutual exclusion, overrides, exposure logging, metric tagging, privacy and compliance, latency and availability targets, experiment lifecycle, and high-level architecture. It is commonly asked because it tests both conceptual understanding and practical application of distributed-system trade-offs (scalability, data integrity, operational reliability, and privacy constraints) and gauges the ability to define high-level APIs and an architecture and to justify the key trade-offs.
You are asked to design a highly available online experiment assignment service. The service exposes an API that takes (userId, featureName) and returns a variant identifier (e.g., control or treatment). Assume the service will be used by web and mobile applications globally and must support both logged-in and logged-out users.
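As a concrete starting point, here is a minimal sketch of that API surface in Python. The names `AssignmentService`, `get_variant`, and the `Assignment` record are illustrative assumptions, not part of the question:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Assignment:
    feature_name: str
    variant: str                # e.g., "control" or "treatment"
    experiment_id: str | None   # None when the unit is not enrolled in any experiment

class AssignmentService:
    def get_variant(self, user_id: str, feature_name: str) -> Assignment:
        """Return a deterministic, sticky variant for (user_id, feature_name).

        For logged-out users, user_id would typically be an anonymous device
        or cookie identifier so that assignment stays stable across requests.
        """
        raise NotImplementedError  # the design exercise below fills this in
```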
Clarify, specify, and propose designs for the following:
- Deterministic sticky assignment
- Configuration storage
- Eligibility targeting
- Traffic allocation
- Mutual exclusion between experiments
- Overrides
- Exposure logging
- Metric tagging
- Privacy and compliance
- Latency and availability targets
- Experiment lifecycle
- High-level architecture
Provide a high-level architecture and key APIs, and justify the key trade-offs. Where relevant, include small examples and precise definitions (e.g., hashing and bucket math, namespace/layering).
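As one illustration, here is a minimal sketch of deterministic bucketing with layer-based mutual exclusion. The SHA-256 choice, the 10,000-bucket granularity, and the function names are assumptions for the sake of the example, not requirements of the question:

```python
import hashlib

NUM_BUCKETS = 10_000  # assumption: 10k buckets gives 0.01% allocation granularity

def bucket_for(unit_id: str, layer: str, salt: str = "exp-v1") -> int:
    """Deterministically map a unit (user or anonymous device id) to a bucket.

    Hashing (salt, layer, unit_id) makes assignment sticky: the same inputs
    always yield the same bucket, with no per-user state to store. Hashing on
    the *layer* name (rather than the experiment name) lets experiments in the
    same layer claim disjoint bucket ranges for mutual exclusion, while
    different layers shuffle users independently.
    """
    digest = hashlib.sha256(f"{salt}:{layer}:{unit_id}".encode()).hexdigest()
    return int(digest[:15], 16) % NUM_BUCKETS  # 60 bits of hash -> near-uniform buckets

def assign_variant(unit_id: str, layer: str,
                   variant_ranges: dict[str, range]) -> str | None:
    """Return the variant whose bucket range contains this unit, else None.

    Example: a 10% experiment split 50/50 might own
    {"control": range(0, 500), "treatment": range(500, 1000)};
    units in buckets 1000-9999 are simply not in the experiment.
    """
    b = bucket_for(unit_id, layer)
    for variant, buckets in variant_ranges.items():
        if b in buckets:
            return variant
    return None
```

Rotating the salt reshuffles an entire layer between experiment generations without persisting any per-user assignments.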