Design and analyze a free-trial A/B test
Company: OpenAI
Role: Data Scientist
Category: Analytics & Experimentation
Difficulty: hard
Interview Round: Technical Screen
You must evaluate whether offering a 1-month free trial increases paid subscription sign-ups. Design an end-to-end A/B test and detail:

(1) Eligibility and randomization: who is included/excluded (e.g., prior payers, grace-period users), the unit of randomization, and how to prevent reassignment and cross-device contamination.
(2) Primary outcome and horizon: define a single launch-gating metric that captures true paid conversion given the 30-day trial delay; justify an observation window (e.g., paid start within 60 days of first exposure) and specify guardrail metrics (refunds, chargebacks, engagement, infrastructure cost).
(3) ITT vs. triggered analyses: describe both and when each should drive the decision; handle users who never see the offer or who churn before trial end.
(4) Sample size: compute the per-arm n for a baseline paid conversion of 4.0%, a minimum detectable effect of +0.8 pp (absolute), two-sided α = 0.05, and power = 0.80; show the formula and the result.
(5) Bias controls: handle seasonality, novelty effects, geographic heterogeneity, and pre-existing conversion propensity (e.g., CUPED with a pre-exposure covariate).
(6) Interference and fraud: detect collusion or referral abuse; protect against multiple sign-ups.
(7) Decision framework: specify exact launch criteria (statistical significance, minimum practical effect, guardrail thresholds), how you would adjust for peeking/sequential looks, and a staged ramp plan if results are borderline.

Provide the precise analysis steps and example tables/figures you would produce.
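For part (1), a common way to get a stable unit of randomization is deterministic hashing of a durable account identifier, so the same user lands in the same arm on every visit and every device. A minimal sketch (the experiment name, bucket count, and 50/50 split are illustrative assumptions, not prescribed by the question):

```python
import hashlib

def assign_arm(user_id: str, experiment: str = "free_trial_v1") -> str:
    """Deterministic 50/50 assignment via hashing.

    Hashing (experiment, user_id) rather than storing a random draw means
    assignment cannot drift on reassignment or across devices, and salting
    with the experiment name decorrelates buckets across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 1000  # 1000 stable buckets for fine-grained ramps
    return "treatment" if bucket < 500 else "control"

print(assign_arm("user_42"))  # same arm on every call, every device
```

Bucketing into 1000 slices (rather than a direct coin flip) also supports the staged ramp in part (7): a 5% ramp is simply `bucket < 50`.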
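For part (4), the standard two-proportion sample-size formula (pooled-variance form) is n = (z_{1-α/2} · sqrt(2·p̄·(1-p̄)) + z_{1-β} · sqrt(p1(1-p1) + p2(1-p2)))² / (p2 - p1)², with p1 = 0.040, p2 = 0.048, and p̄ their average. A stdlib-only sketch of the computation:

```python
from math import ceil, sqrt
from statistics import NormalDist

def per_arm_n(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-arm n for a two-sided two-proportion z-test, pooled-variance form."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)          # 0.8416 for power = 0.80
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

print(per_arm_n(0.040, 0.048))  # roughly 10,300 users per arm
```

Variants of the formula (unpooled variance, continuity correction) shift the answer by a few percent; any of them is acceptable in an interview as long as the formula is stated.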
Quick Answer: A strong answer randomizes never-paid users at the account level with deterministic hashing, gates launch on an intent-to-treat paid-conversion metric measured within 60 days of first exposure (using the triggered analysis as a diagnostic), sizes the test at roughly 10,300 users per arm for the stated baseline, MDE, α, and power, reduces variance with CUPED on a pre-exposure covariate, controls peeking with a pre-registered sequential plan, and ramps gradually if results are borderline. The question evaluates a data scientist's competency in end-to-end A/B test design, causal inference, metric and observation-horizon definition, ITT versus triggered analyses, sample-size calculation, bias and interference controls, and statistical decision frameworks.