How do you test two variants vs control?
Company: Gusto
Role: Data Scientist
Category: Analytics & Experimentation
Difficulty: medium
Interview Round: Technical Screen
You ran an A/B/n experiment with 1 control and 2 treatment variants. The primary metric is **conversion rate** (each user either converts or not within the experiment window). Users are independently assigned to groups.
You are given the following aggregated results:
| group | users (n) | conversions (x) |
|---|---:|---:|
| control | 50,000 | 5,000 |
| variant_a | 50,000 | 5,250 |
| variant_b | 50,000 | 5,400 |
Assumptions:
- Two-sided tests unless specified.
- Significance level α = 0.05.
- Metric is a binomial proportion; large-sample normal approximations are acceptable.
Tasks:
1) In **Python**, compute the **p-value** for each variant vs control using an appropriate statistical test for proportions.
2) Because there are two variants, address **multiple comparisons** (e.g., Bonferroni or FDR). Report adjusted significance conclusions.
3) Provide 95% confidence intervals for the uplift (absolute and/or relative) for each variant vs control.
4) Make a **ship / no-ship** recommendation. Explain what additional checks you would do before shipping (e.g., guardrail metrics, segmentation concerns, SRM, novelty effects), and how you would communicate uncertainty to stakeholders.
(You may use `statsmodels`/`scipy` or implement the formulas directly.)
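A minimal sketch of tasks 1–3 using only `scipy`, implementing the formulas directly: a pooled two-proportion z-test per variant, a Bonferroni-adjusted threshold for the two comparisons, and a normal-approximation 95% CI on the absolute uplift (the `groups`/`results` names are illustrative, not required by the question):

```python
from scipy.stats import norm

# Aggregated results from the table: (users, conversions)
n0, x0 = 50_000, 5_000                      # control
groups = {"variant_a": (50_000, 5_250),
          "variant_b": (50_000, 5_400)}

p0 = x0 / n0
alpha = 0.05
m = len(groups)                             # comparisons for Bonferroni

results = {}
for name, (n1, x1) in groups.items():
    p1 = x1 / n1
    # Task 1: pooled two-proportion z-test (two-sided)
    p_pool = (x0 + x1) / (n0 + n1)
    se_pool = (p_pool * (1 - p_pool) * (1 / n0 + 1 / n1)) ** 0.5
    z = (p1 - p0) / se_pool
    pval = 2 * norm.sf(abs(z))
    # Task 3: unpooled SE for the CI on the absolute uplift p1 - p0
    se = (p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0) ** 0.5
    zc = norm.ppf(1 - alpha / 2)
    lo, hi = (p1 - p0) - zc * se, (p1 - p0) + zc * se
    # Task 2: compare against the Bonferroni-adjusted level alpha / m
    results[name] = {"z": z, "p": pval, "ci": (lo, hi),
                     "significant": pval < alpha / m}
    print(f"{name}: z={z:.3f}, p={pval:.4g}, "
          f"abs uplift 95% CI=({lo:.4f}, {hi:.4f}), "
          f"significant after Bonferroni={results[name]['significant']}")
```

With these inputs both variants clear the Bonferroni-adjusted threshold of 0.025; a Benjamini–Hochberg (FDR) correction would be less conservative and reach the same conclusion here.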
Quick Answer: This question evaluates applied, intermediate-level skill in A/B/n experimentation: hypothesis testing for binomial proportions, multiple-comparison correction, confidence-interval estimation for conversion uplift, and ship/no-ship decision-making under uncertainty.