Design an A/B test for a new shop-ads algorithm
Company: Meta
Role: Data Scientist
Category: Analytics & Experimentation
Difficulty: medium
Interview Round: Technical Screen
A new ranking/promotion algorithm will change which **shop ads** are shown (and their order). You are asked: “How do we know if this new algo is good?”
Design an online experiment and analysis plan. Address the following:
1) **Randomization / experiment unit**
- Would you split by **user**, **session**, or something else (e.g., geo, device)?
- What are the tradeoffs (interference, contamination, variance, returning users, cross-session effects)?
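To make the user-level option concrete: a common implementation is to hash the user ID together with a per-experiment salt, so assignment is stable across sessions and decorrelated from other experiments. The function and salt names below are hypothetical, and the 50/50 split is an assumption:

```python
import hashlib

def assign_bucket(user_id: str, experiment_salt: str = "shop_ads_v2") -> str:
    """Deterministically assign a user to control or treatment.

    Hashing (salt + user_id) keeps the assignment stable across
    sessions, so a returning user always sees the same variant.
    The salt is a hypothetical per-experiment key that decorrelates
    this split from any other concurrently running experiment.
    """
    digest = hashlib.md5(f"{experiment_salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # uniform bucket in 0..99
    return "treatment" if bucket < 50 else "control"
```

Because the hash is deterministic, the same user lands in the same arm on every visit, which avoids the cross-session contamination a session-level split would introduce.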
2) **Primary and guardrail metrics**
- Propose multiple plausible success metrics (e.g., CTR, CVR, revenue, advertiser ROI, long-term retention) and explain tradeoffs.
- Include at least one guardrail for user experience and one for marketplace health.
3) **Power / sample size**
- Describe how you would run a power analysis: which baseline rates you need, what the minimum detectable effect (MDE) represents, and how you would handle skewed revenue distributions.
- Mention what you would do if revenue is heavy-tailed (e.g., winsorization, log transform, CUPED).
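The power-analysis and heavy-tail points above can be sketched in plain Python. This is a minimal illustration, not a prescribed solution: the sample-size formula is the standard normal approximation for a two-sided test on a proportion, the winsorizer caps at a percentile, and `cuped_adjust` is the usual CUPED regression adjustment against a pre-experiment covariate. All function names and defaults are assumptions:

```python
import math
from statistics import NormalDist

def sample_size_per_arm(baseline_rate: float, mde_abs: float,
                        alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate per-arm n for a two-sided z-test on a proportion
    (e.g. CTR). mde_abs is the absolute minimum detectable effect,
    e.g. 0.002 for a 0.2pp lift on a 2% baseline."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p = baseline_rate
    n = 2 * (z_alpha + z_beta) ** 2 * p * (1 - p) / mde_abs ** 2
    return math.ceil(n)

def winsorize(values, upper_pct: float = 0.99):
    """Cap heavy-tailed revenue at an upper percentile so a few
    whale purchases do not dominate the variance."""
    s = sorted(values)
    cap = s[min(len(s) - 1, int(upper_pct * len(s)))]
    return [min(v, cap) for v in values]

def cuped_adjust(y, x):
    """CUPED: y_adj = y - theta * (x - mean(x)), where x is a
    pre-experiment covariate (e.g. last month's revenue) and
    theta = cov(y, x) / var(x). Reduces variance without bias."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    var = sum((xi - mx) ** 2 for xi in x) / n
    theta = cov / var
    return [yi - theta * (xi - mx) for xi, yi in zip(x, y)]
```

In practice the sample-size estimate tells you roughly how long to run given daily eligible traffic, and the winsorized or CUPED-adjusted revenue metric shrinks the required duration.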
4) **Conflicting metric scenario**
- Suppose **CTR increases** but **revenue decreases**. List at least 4 plausible causes (product and statistical), and how you would debug and decide.
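One quick diagnostic for this scenario is to decompose revenue per impression into CTR times revenue per click and compare each factor between arms, which localizes whether the loss comes from cheaper clicks, fewer conversions downstream, or a mix shift. The numbers below are made up purely for illustration:

```python
def decompose(impressions: int, clicks: int, revenue: float) -> dict:
    """Split revenue per impression into CTR x revenue-per-click.
    Comparing each factor across arms shows which piece moved."""
    ctr = clicks / impressions
    rpc = revenue / clicks if clicks else 0.0
    return {"ctr": ctr,
            "revenue_per_click": rpc,
            "revenue_per_impression": ctr * rpc}

# Hypothetical arm-level totals for illustration only.
control = decompose(impressions=1_000_000, clicks=20_000, revenue=40_000)
treatment = decompose(impressions=1_000_000, clicks=25_000, revenue=35_000)
# Treatment CTR rose (0.025 vs 0.020) but revenue per click fell
# (1.40 vs 2.00), so revenue per impression dropped (0.035 vs 0.040):
# the new ranking may be surfacing clickier but lower-value ads.
```

The same decomposition can be pushed one level deeper (clicks to conversions to order value) to separate product causes from statistical ones such as a few missing whale purchases.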
5) **Common experiment pitfalls**
- List common issues (sample ratio mismatch (SRM), bots/fraud, novelty effects, insufficient duration, multiple testing), and explain how you would monitor for and mitigate each.
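The SRM check in particular is easy to automate: a one-degree-of-freedom chi-square goodness-of-fit test of observed arm counts against the planned split, run daily so a broken randomizer is caught early. A minimal sketch, assuming a two-arm experiment (the chi-square(1) tail probability is computed via the normal distribution, since chi-square(1) is the square of a standard normal):

```python
from math import erf, sqrt

def srm_check(n_control: int, n_treatment: int,
              expected_ratio: float = 0.5):
    """Sample-ratio-mismatch check: chi-square goodness-of-fit test
    (1 degree of freedom) of observed arm counts vs. the planned
    split. Returns (chi2, p_value); a tiny p-value (e.g. < 0.001)
    flags SRM and means results should not be trusted."""
    total = n_control + n_treatment
    exp_t = total * expected_ratio
    exp_c = total * (1 - expected_ratio)
    chi2 = ((n_treatment - exp_t) ** 2 / exp_t
            + (n_control - exp_c) ** 2 / exp_c)
    # For chi-square(1): P(X > chi2) = P(|Z| > sqrt(chi2)) = 2*(1 - Phi(sqrt(chi2)))
    z = sqrt(chi2)
    p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
    return chi2, p_value
```

A low SRM threshold (e.g. p < 0.001 rather than 0.05) is deliberate: the test runs repeatedly, and an SRM alarm should mean something is genuinely broken in assignment or logging.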
Quick Answer: Evaluates experimental-design and data-analysis skills in the Analytics & Experimentation category for a Data Scientist position: randomization and unit selection, metric definition and guardrails, power and sample-size reasoning, diagnosis of conflicting metric signals, and awareness of common experimentation pitfalls.