Design an A/B test for promo-targeting models
Company: Uber
Role: Data Scientist
Category: Analytics & Experimentation
Difficulty: hard
Interview Round: Technical Screen
Design a controlled experiment to decide whether a candidate promo-targeting model (M1) beats the incumbent (M0) at assigning $5 promos. Requirements:

1. Randomize users into two arms; within each arm, rank by that arm's model and send to the arm-specific top-K per day so both arms have equal expected spend and contact frequency.
2. Define the primary metric as incremental profit per eligible user (redemption uplift × expected GMV − $5) with guardrails (opt-outs, uninstalls, support contacts).
3. Use triggered analysis (eligible, not recently contacted) with intent-to-treat as the estimand; log exposure, assignment, eligibility, and redemption events.
4. Compute the required sample size given baseline redemption 3%, MDE +0.5 pp absolute, α = 0.05 two-sided, power 0.8; show the formula, include variance inflation for day-level clustering, and plan for delayed outcomes.
5. Specify the analysis: SRM checks, CUPED using pre-period outcomes, and sequential monitoring with an alpha-spending plan; report point estimates and CIs with robust (clustered) variances.
6. Address interference and saturation (e.g., users influencing each other, or channel limits); if present, propose cluster randomization or a switchback design and explain the trade-offs.
7. Detail how you will keep budgets equal despite score drift (e.g., per-arm thresholding, rebalancing), handle throttling and suppression lists, and ensure no spillover between arms.
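The equal-budget mechanics in the randomization requirement can be sketched as a per-arm daily top-K send that honors a suppression list before ranking. Everything here (function and variable names, the example scores) is illustrative, not a prescribed implementation:

```python
def daily_sends(scores, k, suppressed=frozenset()):
    """Rank eligible users by this arm's own model score and return the
    top-K to contact today. Filtering suppressed users before ranking
    keeps both arms at exactly K sends (K * $5 of spend) per day."""
    eligible = [u for u in scores if u not in suppressed]
    eligible.sort(key=scores.get, reverse=True)
    return eligible[:k]

# Each arm ranks with its own model but uses the same K:
m0_scores = {"u1": 0.9, "u2": 0.4, "u3": 0.7}
m1_scores = {"u1": 0.2, "u2": 0.8, "u3": 0.6}
print(daily_sends(m0_scores, k=2))                     # ['u1', 'u3']
print(daily_sends(m1_scores, k=2, suppressed={"u2"}))  # ['u3', 'u1']
```

Ranking to a fixed K per arm (rather than a fixed score threshold) is what keeps expected spend equal even as model scores drift between the arms.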
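For the sample-size requirement, the standard two-proportion formula with a cluster design effect can be computed with nothing but the standard library. The cluster size m and ICC ρ below are placeholders to be replaced with estimates from historical logs:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p0, mde, alpha=0.05, power=0.8, deff=1.0):
    """Per-arm sample size for a two-sided two-proportion test,
    inflated by a design effect for clustered observations."""
    p1 = p0 + mde
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # 0.84 for power = 0.8
    p_bar = (p0 + p1) / 2
    n = (z_a * sqrt(2 * p_bar * (1 - p_bar))
         + z_b * sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2 / mde ** 2
    return ceil(n * deff)

# Baseline redemption 3%, MDE +0.5 pp absolute:
print(n_per_arm(0.03, 0.005))  # ~19,700 eligible users per arm

# Day-level clustering: DEFF = 1 + (m - 1) * rho, here with a
# hypothetical m = 7 daily observations per user and ICC rho = 0.05:
print(n_per_arm(0.03, 0.005, deff=1 + (7 - 1) * 0.05))
```

The design effect only corrects the variance for within-user correlation across days; delayed redemptions additionally argue for a fixed attribution window per exposure before the final read.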
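The CUPED adjustment named in the analysis requirement is a one-line covariate correction: regress the in-experiment outcome on a pre-period covariate and subtract the predicted component. A minimal sketch on simulated data (the distributions and coefficients are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# Hypothetical pre-period outcome X correlated with in-experiment outcome Y
# (e.g., pre-period GMV vs. in-experiment GMV):
x = rng.normal(10, 3, n)
y = 0.6 * x + rng.normal(0, 2, n)

# theta = Cov(Y, X) / Var(X); Y_cuped has the same mean but lower variance.
theta = np.cov(y, x, bias=True)[0, 1] / np.var(x)
y_cuped = y - theta * (x - x.mean())

print(np.var(y), np.var(y_cuped))  # variance shrinks by ~corr(X, Y)^2
```

Because assignment is random, X is balanced across arms, so the adjustment leaves the treatment-effect estimate unbiased while tightening its confidence interval.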
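The SRM check is a chi-square goodness-of-fit test on observed arm counts against the intended split. For a 50/50 split (1 degree of freedom) the p-value follows from the normal CDF, so the standard library suffices; the counts below are made up:

```python
from math import sqrt
from statistics import NormalDist

def srm_pvalue(n_control, n_treatment):
    """Chi-square (1 df) test that a 50/50 randomization was respected."""
    expected = (n_control + n_treatment) / 2
    chi2 = ((n_control - expected) ** 2 / expected
            + (n_treatment - expected) ** 2 / expected)
    # For 1 df: P(Chi2 > x) = 2 * (1 - Phi(sqrt(x)))
    return 2 * (1 - NormalDist().cdf(sqrt(chi2)))

print(srm_pvalue(10_000, 10_000))  # 1.0 -- perfectly balanced
print(srm_pvalue(10_000, 10_450))  # small p-value -> investigate before reading metrics
```

A tiny SRM p-value (a common trigger threshold is 0.001) means the logging or assignment pipeline is broken, and any metric comparison is invalid until the cause is found.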
Quick Answer: This question evaluates a data scientist's competency in experiment design, causal inference, metric engineering, power and sample-size calculation, and operational experimentation concerns such as logging, throttling, guardrails, and budget control.