Design and analyze a banner A/B test
Company: Snapchat
Role: Data Scientist
Category: Analytics & Experimentation
Difficulty: hard
Interview Round: Onsite
You are deciding whether to add a home-page banner. Design and analyze the A/B test end to end:

1) Randomization unit and exposure (user-level vs. session-level); address cross-session consistency and interference.
2) Primary and guardrail metrics, including CTR, dwell time after click, retention, and revenue per session. Precisely define accidental clicks and how to exclude or reweight them (e.g., dwell < 500 ms, or an immediate back within 2 s).
3) Powering: with baseline CTR = 1.5% and an expected relative lift of 10%, compute the per-arm sample size for 90% power and α = 0.05 (two-sided); show formulas and assumptions (pooled variance; continuity correction optional).
4) Analysis: handle multiple banner placements (multiple comparisons), position bias, and novelty effects; specify CUPED or another pre-period covariate adjustment.
5) Diagnostic checks: exposure logging, sample-ratio (SRM) checks, bot filtering, and sequential monitoring with proper alpha spending.
6) Decision rule: state explicit launch criteria and a fallback if accidental-click rates spike.
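For the accidental-click definition in step 2, the prompt's thresholds (dwell < 500 ms, or back within 2 s) can be sketched as a simple filter. The function and field names here are illustrative, not a real Snapchat logging schema:

```python
from math import inf

def is_accidental_click(dwell_ms, seconds_to_back=inf):
    """Heuristic from the prompt: a click counts as accidental if
    post-click dwell is under 500 ms, or the user navigates back
    within 2 s of clicking. Thresholds are the prompt's examples."""
    return dwell_ms < 500 or seconds_to_back <= 2.0

def filtered_ctr(impressions, clicks):
    """CTR excluding accidental clicks; `clicks` is a list of
    (dwell_ms, seconds_to_back) pairs (hypothetical event format)."""
    real = sum(1 for d, b in clicks if not is_accidental_click(d, b))
    return real / impressions
```

A reweighting variant would down-weight rather than drop such clicks; either way, the same rule must be applied identically to both arms.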
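For the powering in step 3, a minimal sketch using the standard two-proportion z-test formula, n = (z_{1-α/2}·√(2p̄(1-p̄)) + z_{1-β}·√(p₁(1-p₁)+p₂(1-p₂)))² / (p₂-p₁)², with pooled variance under H0 and no continuity correction (`per_arm_sample_size` is an illustrative helper):

```python
from math import ceil, sqrt
from statistics import NormalDist

def per_arm_sample_size(p1, rel_lift, alpha=0.05, power=0.90):
    """Per-arm n for a two-sided two-proportion z-test.
    Pooled variance under H0, unpooled under H1; no continuity correction."""
    p2 = p1 * (1 + rel_lift)
    delta = p2 - p1
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~1.28 for 90% power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / delta ** 2)

print(per_arm_sample_size(0.015, 0.10))  # roughly 145k users per arm
```

With baseline CTR 1.5% and a 10% relative lift (to 1.65%), this lands near 145,000 users per arm; a continuity correction would push the figure slightly higher.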
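For the CUPED adjustment in step 4, a stdlib-only sketch: regress the experiment metric on the same user's pre-period value and analyze the residualized metric, which shrinks variance by the squared pre/post correlation (`cuped_adjust` is an illustrative helper):

```python
from statistics import fmean

def cuped_adjust(y, x):
    """CUPED: return y - theta * (x - mean(x)), where x is the
    pre-experiment covariate and theta = cov(x, y) / var(x).
    The adjusted metric keeps the same mean but lower variance."""
    mx, my = fmean(x), fmean(y)
    n = len(x)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    var = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    theta = cov / var
    return [yi - theta * (xi - mx) for xi, yi in zip(x, y)]

# Demo on synthetic data: same mean, lower variance.
import random
random.seed(7)
x = [random.gauss(0, 1) for _ in range(5000)]        # pre-period metric
y = [0.6 * xi + random.gauss(0, 1) for xi in x]      # in-experiment metric
y_adj = cuped_adjust(y, x)
```

In practice theta is estimated on pooled data (or control only) and the adjustment is applied identically to both arms before the treatment-effect test.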
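The sample-ratio check in step 5 is a chi-square goodness-of-fit test of the observed traffic split against the intended allocation; with two arms it has one degree of freedom, so the p-value reduces to `erfc(sqrt(chi2 / 2))`. A stdlib-only sketch (`srm_pvalue` is an illustrative helper):

```python
from math import erfc, sqrt

def srm_pvalue(n_control, n_treatment, expected_share=0.5):
    """Sample-ratio-mismatch (SRM) check: chi-square goodness-of-fit
    test (1 df) of the observed split vs. the intended allocation.
    A tiny p-value signals broken randomization or exposure logging."""
    total = n_control + n_treatment
    exp_c = total * expected_share
    exp_t = total - exp_c
    chi2 = ((n_control - exp_c) ** 2 / exp_c
            + (n_treatment - exp_t) ** 2 / exp_t)
    # Survival function of chi-square with 1 df: P(X > chi2) = erfc(sqrt(chi2/2))
    return erfc(sqrt(chi2 / 2))
```

A common practice is to gate the readout on this check (e.g., p < 0.001 means investigate logging before trusting any metric).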
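For the sequential monitoring in step 5, one standard choice is the Lan-DeMets O'Brien-Fleming-type spending function, α(t) = 2·(1 - Φ(z_{α/2}/√t)), which spends almost no alpha at early looks and exactly α at full information. A minimal sketch (`obf_alpha_spent` is an illustrative helper):

```python
from math import sqrt
from statistics import NormalDist

def obf_alpha_spent(t, alpha=0.05):
    """Cumulative two-sided alpha spent by information fraction t
    (0 < t <= 1) under an O'Brien-Fleming-type spending function:
    alpha(t) = 2 * (1 - Phi(z_{alpha/2} / sqrt(t)))."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)
    return 2 * (1 - nd.cdf(z / sqrt(t)))
```

The interim z boundaries are then backed out from the alpha spent between looks; peeking without such an adjustment inflates the false-positive rate well above 5%.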
Quick Answer: This question evaluates end-to-end experiment design and analysis: choosing a randomization unit and exposure definition, defining primary and guardrail metrics precisely (including how to handle accidental clicks), sizing the test with a power calculation, planning the analysis around multiple comparisons and covariate adjustment, and validating the experiment with diagnostic checks. It is commonly asked because it tests both conceptual understanding and practical command of experimentation methodology: the candidate must manage measurement issues, statistical assumptions, and explicit decision rules that determine whether a product change is supported by the data.