A/B Testing Interview Questions
A/B testing questions are central to data science and product analytics interviews at companies like Meta, Google, Netflix, and Airbnb.
Expect questions on experiment design, randomization units, sample size calculation, multiple comparisons, and metric selection.
Interviewers evaluate your statistical rigor, practical judgment, and ability to communicate experiment results.
Common A/B testing interview patterns
- Designing an experiment for a product change
- Calculating sample size and experiment duration
- Choosing between one-sided and two-sided tests
- Handling multiple comparisons and peeking (see the sketch after this list)
- Interpreting results with novelty or primacy effects
- Network effects and interference between test groups
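For the multiple-comparisons pattern, interviewers usually want a concrete correction, not just the phrase "adjust for it." A minimal sketch, assuming statsmodels is available and using made-up p-values from testing one change against eight metrics, of a Benjamini-Hochberg adjustment:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Hypothetical raw p-values from evaluating one change against eight metrics
p_values = np.array([0.004, 0.021, 0.049, 0.030, 0.62, 0.11, 0.008, 0.35])

# Benjamini-Hochberg controls the false discovery rate across the family of tests
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

for raw, adj, sig in zip(p_values, p_adjusted, reject):
    print(f"raw p = {raw:.3f}  adjusted p = {adj:.3f}  significant: {sig}")
```

Note that four of the raw p-values fall below 0.05, but only two survive the adjustment. Bonferroni (`method="bonferroni"`) is a stricter alternative when any false positive is costly.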
A/B testing interview questions
Diagnose Google Meet Disconnections and Assess Business Impact
Design Experiments to Evaluate Courier Initiatives Effectively
Estimate ATE of personalization on streaming
Calculate Profitability and Evaluate Partnership for Credit Card Portfolio
Diagnose Decline in First Day Funding Rate
Define Success Metrics for Circle Feature Evaluation
How to estimate feature impact on usage time
Evaluate Metrics for Restaurant-Feature Impact and Engagement Trade-offs
Diagnose YouTube Usage Decline: Key Metrics and Segmentation
Design metrics and A/B test for maps and ETA
Evaluate Messenger's P2P Payments Feature for Business Viability
Investigate Sudden Metric Changes and Design A/B Test
Explain App Growth Strategy and Key Performance Metrics
Evaluate Top-Dasher Program's Benefits and Challenges
How to measure harmful-content severity and run experiments
Determine Sample Size for Promotion Campaign A/B Test
Evaluating the Facebook ‘Memory’ feature
Evaluate Rider-Incentive Program Impact with Key Metrics
Define Metrics and Account for Network and Novelty Effects
Common mistakes in A/B testing interviews
- Not specifying the randomization unit (user vs session vs page)
- Peeking at results before reaching the required sample size (simulated after this list)
- Ignoring practical significance when statistical significance is achieved
- Not considering guardrail metrics
- Failing to account for novelty effects in short experiments
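A quick way to see why peeking inflates false positives is to simulate an A/A test with repeated interim checks. A hedged sketch, where the 10% conversion rate, ten looks, and simulation counts are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_per_arm, n_looks = 2_000, 10_000, 10
false_positives = 0

for _ in range(n_sims):
    # A/A test: both arms draw from the same 10% conversion rate,
    # so any "significant" result is a false positive by construction
    a = rng.binomial(1, 0.10, n_per_arm)
    b = rng.binomial(1, 0.10, n_per_arm)
    # Peek at ten evenly spaced interim points; stop at the first p < 0.05
    for k in range(1, n_looks + 1):
        n = k * n_per_arm // n_looks
        if stats.ttest_ind(a[:n], b[:n]).pvalue < 0.05:
            false_positives += 1
            break

print(f"False positive rate with 10 peeks: {false_positives / n_sims:.1%}")
# Typically lands well above the nominal 5%, often in the 15-20% range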
How A/B testing questions are evaluated
Structure your experiment design: hypothesis, metrics, randomization unit, sample size, and duration.
Discuss what could go wrong and how you would detect it.
Show ability to make a recommendation even when results are ambiguous.
A/B Testing Interview FAQs
How do you determine the sample size for an A/B test?
Use a power analysis with four inputs: the baseline metric, the minimum detectable effect (MDE), the significance level (alpha, usually 0.05), and the power (usually 0.80). Larger effects need fewer samples; for a small MDE on a rare event, you may need millions of users.
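A minimal sketch of that power analysis in Python, assuming statsmodels and an illustrative 10% baseline conversion rate with a 1-percentage-point MDE:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.10   # assumed baseline conversion rate
mde = 0.01        # minimum detectable effect: 1 percentage point absolute lift

# Convert the two proportions to Cohen's h, the effect size the solver expects
effect_size = proportion_effectsize(baseline + mde, baseline)

n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,            # significance level
    power=0.80,            # 1 - beta
    alternative="two-sided",
)
print(f"Need roughly {n_per_group:,.0f} users per group")
```

Because the required sample scales roughly with 1/MDE², halving the MDE about quadruples the sample size, which is why detecting small lifts on rare events can demand millions of users.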
What is the difference between statistical and practical significance?
Statistical significance means the observed difference is unlikely due to chance (p-value < alpha). Practical significance means the effect is large enough to matter for the business. A statistically significant 0.01% lift may not be worth the engineering cost to ship.
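To make the distinction concrete, here is a sketch with made-up numbers: a two-proportion z-test on five million users per arm, where a 0.05-percentage-point lift clears the significance bar but may still be too small to act on.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: conversion of 10.05% (treatment) vs 10.00% (control)
conversions = [502_500, 500_000]           # treatment, control
observations = [5_000_000, 5_000_000]

z_stat, p_value = proportions_ztest(conversions, observations)
lift = conversions[0] / observations[0] - conversions[1] / observations[1]
print(f"p-value = {p_value:.4f}, absolute lift = {lift:.4%}")
# p-value is well under 0.05, yet the lift is only 0.05 percentage
# points and may not justify the cost of building and maintaining the feature
```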