# Evaluate Impact of a Shipped Version on Disconnections (No A/B Holdout)

## Context

A new client version was shipped system-wide with the goal of reducing disconnections. There is no explicit A/B holdout. You must design and execute a rigorous observational evaluation, quantify uncertainty, and communicate results.

## Tasks
- Design choices
  - Choose a primary identification strategy among:
    - Interrupted time series (ITS) with seasonality adjustment
    - Difference-in-differences (DiD) using any untreated regions/devices
    - Synthetic control built from donor pools
    - Regression discontinuity (RD), if rollout timing is sharp and exogenous
  - Explain when you would use each method and why.
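As a concrete illustration of the ITS option, the sketch below fits a level-shift model to synthetic daily drop rates. All data, the ship date, and the effect size are made up for illustration; a real analysis would add seasonality terms (e.g., day-of-week dummies) as extra columns and use a proper statistics package rather than hand-rolled OLS.

```python
# Minimal interrupted-time-series sketch (hypothetical data, stdlib only).
# Model: rate_t = b0 + b1*t + b2*post_t, where post_t = 1 on/after the ship
# date. b2 is the estimated level shift at release.

def ols(X, y):
    """Solve the normal equations X'X b = X'y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(len(X))) for q in range(k)]
         for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(len(X))) for p in range(k)]
    for col in range(k):                      # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                          # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, k))) / A[r][r]
    return beta

ship_day = 30
# Synthetic daily drop rates: slight downward trend, -0.3 level shift at release.
rates = [3.0 - 0.002 * t - (0.3 if t >= ship_day else 0.0) for t in range(60)]
X = [[1.0, float(t), 1.0 if t >= ship_day else 0.0] for t in range(60)]
b0, b1, b2 = ols(X, rates)
print(f"estimated level shift: {b2:.3f}")  # ≈ -0.300 on this noise-free data
```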
- Identification and validation
  - State the identification assumptions for the chosen method.
  - Describe pre-trend/parallel-trends checks, placebo tests, and robustness to staggered rollout.
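One of the placebo tests above can be sketched as a placebo-in-time check: re-estimate the pre/post difference at a fake ship date that lies entirely inside the pre-period. The data, dates, and effect size below are hypothetical; a real check would use the full ITS model, not a bare mean difference.

```python
# Placebo-in-time sketch (hypothetical data). A real effect should show up
# only at the true ship date; a comparable "effect" at the placebo date
# would signal a trend or seasonality confound.

from statistics import mean

def step_effect(series, cut):
    """Naive pre/post mean difference at index `cut` (trend-free sketch)."""
    return mean(series[cut:]) - mean(series[:cut])

true_ship = 40
# Synthetic daily drop rates: flat at 3.0, then a -0.3 shift at release.
rates = [3.0 - (0.3 if t >= true_ship else 0.0) for t in range(60)]

real = step_effect(rates, true_ship)          # ≈ -0.30 at the true ship date
placebo = step_effect(rates[:true_ship], 20)  # ≈ 0.00 at the fake ship date
print(f"real: {real:.2f}, placebo: {placebo:.2f}")
```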
- Metrics and models
  - Define the primary outcome and guardrail metrics.
  - Specify an appropriate model family (e.g., binomial/Poisson/negative binomial) and the exposure measure.
  - Specify variance estimation (e.g., cluster-robust SEs, HAC/Newey–West) and variance reduction (e.g., CUPED).
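The CUPED variance reduction mentioned above has a simple closed form: adjust each unit's post-period outcome by its pre-period covariate, `adjusted = post - theta * (pre - mean(pre))` with `theta = cov(pre, post) / var(pre)`, which leaves the mean unchanged while shrinking variance by roughly a factor of (1 − corr²). A minimal sketch on simulated per-unit drop rates (all numbers hypothetical):

```python
# CUPED sketch: use each unit's pre-period drop rate as the covariate for
# its post-period rate. Mean is preserved; variance shrinks.

from statistics import mean, variance
import random

random.seed(7)
n = 500
pre = [random.gauss(3.0, 0.5) for _ in range(n)]
# Post rates correlated with pre (stable per-unit behavior) plus noise.
post = [0.8 * p + random.gauss(0.5, 0.2) for p in pre]

m_pre, m_post = mean(pre), mean(post)
cov = sum((p - m_pre) * (q - m_post) for p, q in zip(pre, post)) / (n - 1)
theta = cov / variance(pre)
adjusted = [q - theta * (p - m_pre) for p, q in zip(pre, post)]

print(f"raw var: {variance(post):.4f}, CUPED var: {variance(adjusted):.4f}")
```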
- Power and estimation
  - Primary metric: drops per 1,000 minutes.
  - Baseline: 3.0 drops/1,000 minutes; exposure: 10,000,000 minutes per day.
  - Compute:
    - A 95% confidence interval for the daily drop-rate estimate under reasonable dispersion.
    - The minimal detectable change (MDC) for a pre/post comparison with 80% power at α = 0.05, assuming realistic autocorrelation.
  - Show assumptions and formulas. Provide MDC examples for 7-, 14-, and 28-day post windows.
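One possible worked answer to the computations above, using the stated figures (baseline 3.0 drops per 1,000 minutes, 10,000,000 minutes/day). The dispersion factor `phi`, daily sd `sigma_day`, autocorrelation `rho`, and the 28-day pre-window are illustrative assumptions, not values given in the prompt.

```python
# CI: daily count ~ overdispersed Poisson, var = phi * mean.
# MDC: pre/post mean comparison with AR(1)-style effective sample size
#   n_eff = n * (1 - rho) / (1 + rho).

from math import sqrt

rate = 3.0                # drops per 1,000 minutes (given)
exposure = 10_000_000     # minutes per day (given)
phi = 2.0                 # assumed overdispersion factor

lam = rate / 1000 * exposure               # expected daily drops: 30,000
se_rate = sqrt(phi * lam) / exposure * 1000
ci = (rate - 1.96 * se_rate, rate + 1.96 * se_rate)
print(f"daily rate 95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")

z_alpha, z_beta = 1.96, 0.84               # alpha = 0.05 two-sided, 80% power
sigma_day = 0.05                           # assumed sd of the daily rate series
rho = 0.3                                  # assumed lag-1 autocorrelation
n_pre = 28                                 # assumed pre-period length (days)

def mdc(n_post):
    n_eff = lambda n: n * (1 - rho) / (1 + rho)
    return (z_alpha + z_beta) * sigma_day * sqrt(1 / n_eff(n_pre)
                                                 + 1 / n_eff(n_post))

for n_post in (7, 14, 28):
    print(f"{n_post}-day post window: MDC = {mdc(n_post):.4f} drops/1,000 min")
```

Note how the MDC shrinks with the post-window length but with diminishing returns, since the fixed pre-period term `1 / n_eff(n_pre)` dominates as `n_post` grows.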
- Confounding and communication
  - Explain how to adjust for confounders (network-mix shifts, seasonality, user composition), including weighting or covariate adjustment.
  - Explain how you would communicate uncertainty and limitations to stakeholders.
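The weighting adjustment for a network-mix shift can be sketched as direct standardization: reweight the post-period per-stratum drop rates by the pre-period minute shares, so that a shift toward a lower-drop network (e.g., more Wi-Fi, less cellular) cannot masquerade as a version effect. All shares and rates below are made-up numbers for illustration.

```python
# Direct-standardization sketch for a network-mix shift (hypothetical data).
# stratum -> (share of minutes, drop rate per 1,000 minutes)
pre  = {"wifi": (0.60, 2.0), "cellular": (0.40, 4.5)}
post = {"wifi": (0.70, 1.9), "cellular": (0.30, 4.3)}

raw_pre  = sum(s * r for s, r in pre.values())   # 3.00
raw_post = sum(s * r for s, r in post.values())  # 2.62: partly the mix shift
# Standardize: post-period rates weighted by PRE-period minute shares.
std_post = sum(pre[k][0] * post[k][1] for k in pre)  # 2.86: genuine change only
print(f"raw post: {raw_post:.2f}, standardized post: {std_post:.2f}")
```

Here the naive pre/post comparison (3.00 → 2.62) overstates the improvement; after holding the network mix fixed, the adjusted change (3.00 → 2.86) is what the version itself plausibly delivered.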