A/B Testing Interview Questions
A/B testing questions are central to data science and product analytics interviews at companies like Meta, Google, Netflix, and Airbnb.
Expect questions on experiment design, randomization units, sample size calculation, multiple comparisons, and metric selection.
Interviewers evaluate your statistical rigor, practical judgment, and ability to communicate experiment results.
Common A/B testing interview patterns
- Designing an experiment for a product change
- Calculating sample size and experiment duration
- Choosing between one-sided and two-sided tests
- Handling multiple comparisons and peeking (see the correction sketch after this list)
- Interpreting results with novelty or primacy effects
- Network effects and interference between test groups
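When a question asks you to handle multiple comparisons concretely, one standard move is to adjust the p-values across all metric comparisons before declaring winners. A minimal sketch using statsmodels; the five p-values are hypothetical:

```python
# Correcting for multiple comparisons across several metrics in one experiment.
from statsmodels.stats.multitest import multipletests

# Hypothetical raw p-values from testing five metrics against control.
p_values = [0.002, 0.04, 0.03, 0.20, 0.008]

# Benjamini-Hochberg controls the false discovery rate; method="bonferroni"
# is the stricter family-wise-error alternative.
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

for p_raw, p_adj, sig in zip(p_values, p_adjusted, reject):
    print(f"raw p={p_raw:.3f}  adjusted p={p_adj:.3f}  significant={sig}")
```

In an interview, naming the trade-off matters more than the method choice: FDR control keeps more power across many metrics, while Bonferroni is safer when any single false positive is costly.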
A/B testing interview questions
- Diagnose and experiment to reduce late deliveries
- Determine Key Metrics for Spend-Tracker Launch Decision
- Diagnose Job Application Decline: Funnel Analysis and Segmentation
- Determine Demand for WhatsApp Group Video-Calls
- Design Experiment to Measure Airport Surge-Pricing Impact
- Design A/B Tests for Banner Ad and Group-Story Feature
- Design Experiments for Email Campaign & Messaging Update
- Analyze Data to Boost Group Post Comment Rates
- Estimate impact without experiments and pick variant
- Design and Analyze A/B Test for Recommendation Widget
- Measure Harmful Content Impact with Key Metrics
- Identify Causes and Solutions for Fashion Profit Decline
- Analyze Algorithm's Impact on Diverse Demographics and Validate Causes
- Evaluate Factors Before Renewing TV-Series Contracts
- Evaluate ETA Impact on Conversion
- Evaluate Campaign Lift with Predictive Analytics and Validation Strategy
- Design a free-month experiment
- Identify Key Profit Factors for $54 Premium Plan
- Explain and validate A/B test assumptions
Common mistakes in A/B testing interviews
- Not specifying the randomization unit (user vs session vs page)
- Peeking at results before reaching the required sample size (the simulation after this list shows why)
- Ignoring practical significance when statistical significance is achieved
- Not considering guardrail metrics
- Failing to account for novelty effects in short experiments
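Why peeking is a mistake is easy to show with a short simulation: under A/A conditions (no true effect), repeatedly checking the test and stopping at the first p < 0.05 produces false positives far more often than the nominal 5%. The sample sizes and number of looks below are hypothetical:

```python
# Simulating daily peeking at an A/A test (no real effect between groups).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_experiments, n_per_day, n_days = 2_000, 200, 20
false_positives = 0

for _ in range(n_experiments):
    a = rng.normal(size=n_per_day * n_days)
    b = rng.normal(size=n_per_day * n_days)  # same distribution: no true effect
    for day in range(1, n_days + 1):
        n = day * n_per_day
        _, p = stats.ttest_ind(a[:n], b[:n])
        if p < 0.05:  # "peek" and stop at the first significant result
            false_positives += 1
            break

print(f"False positive rate with daily peeking: {false_positives / n_experiments:.1%}")
# Typically prints well above the nominal 5% (often 20% or more).
```

If stakeholders insist on early looks, mention sequential testing methods (e.g., alpha spending or always-valid p-values) as the principled fix.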
How A/B testing questions are evaluated
Structure your experiment design: hypothesis, success and guardrail metrics, randomization unit, sample size, and duration.
Discuss what could go wrong and how you would detect it.
Show ability to make a recommendation even when results are ambiguous.
A/B Testing Interview FAQs
How do you determine the sample size for an A/B test?
Use a power analysis with four inputs: the baseline metric value, the minimum detectable effect (MDE), the significance level (alpha, usually 0.05), and the power (usually 0.80). Larger effects need fewer samples; for a small MDE on a rare event, you may need millions of users.
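A minimal sketch of that power analysis for a conversion-rate metric, using statsmodels; the baseline, MDE, and daily-traffic figures below are hypothetical:

```python
# Sample size and duration for a two-proportion A/B test.
import math
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.10          # current conversion rate
mde = 0.01               # minimum detectable effect (absolute lift)
alpha, power = 0.05, 0.80

# Cohen's h effect size for comparing two proportions.
effect_size = proportion_effectsize(baseline + mde, baseline)

# Required sample size per variant for a two-sided test.
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power, alternative="two-sided"
)

daily_users_per_group = 5_000  # hypothetical traffic per variant
print(f"Users per variant: {math.ceil(n_per_group):,}")
print(f"Duration at {daily_users_per_group:,}/day: "
      f"{math.ceil(n_per_group / daily_users_per_group)} days")
```

A useful interview follow-up: duration should also cover at least one full weekly cycle, even if the raw sample size is reached sooner.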
What is the difference between statistical and practical significance?
Statistical significance means the observed difference is unlikely to be due to chance (p-value < alpha). Practical significance means the effect is large enough to matter for the business. A statistically significant 0.01% lift may not be worth the engineering cost to ship.
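A toy illustration with hypothetical numbers: at very large sample sizes, a tiny lift clears the statistical bar while staying below any reasonable ship threshold:

```python
# Statistically significant but practically negligible result.
from statsmodels.stats.proportion import proportions_ztest

conversions = [502_000, 500_000]   # treatment, control (hypothetical)
users = [5_000_000, 5_000_000]

z_stat, p_value = proportions_ztest(conversions, users)
lift = conversions[0] / users[0] - conversions[1] / users[1]

print(f"p-value: {p_value:.4f}, absolute lift: {lift:.4%}")
# p is well under 0.05, yet the lift is ~0.04 percentage points,
# which may not justify the cost and complexity of shipping.
```

In interviews, pair this with a decision rule: state the MDE up front and recommend shipping only if the confidence interval clears it.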