Experimenting on a New Paywall with Likely Spillovers
Context
You are designing an experiment to evaluate a new paywall on a social/content app where users frequently share links with each other and externally. Because of sharing and algorithmic amplification, a user's outcomes may be affected by other users' treatment assignments (interference/spillovers), violating the stable unit treatment value assumption (SUTVA) behind standard A/B tests.
Task
Define success metrics and choose an experimentation strategy that handles interference. Address all parts concisely and concretely.
Requirements
(a) Interference and mitigation
- Identify where interference is most probable (e.g., shared links, follower/creator relationships, households, algorithm training).
- Propose mitigation options (e.g., cluster randomization by social graph or geography, randomized saturation designs, exposure models).
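One mitigation an answer to (a) might sketch is a randomized-saturation design: clusters draw a treatment saturation level, and users within each cluster are treated individually at that rate, so direct effects can be separated from spillovers. A minimal sketch, assuming a precomputed cluster map and illustrative saturation levels:

```python
import random

def randomized_saturation(clusters, saturations=(0.0, 0.5, 1.0), seed=0):
    """Two-stage design: each cluster draws a treatment saturation,
    then that fraction of its users is treated individually.
    Comparing users across saturation levels helps separate direct
    effects from spillovers."""
    rng = random.Random(seed)
    assignment = {}
    for cluster_id, users in clusters.items():
        sat = rng.choice(saturations)          # stage 1: cluster-level draw
        members = list(users)
        rng.shuffle(members)
        n_treated = round(sat * len(members))  # stage 2: within-cluster draw
        for i, user in enumerate(members):
            assignment[user] = {"cluster": cluster_id,
                                "saturation": sat,
                                "treated": i < n_treated}
    return assignment

# Hypothetical clusters (e.g., communities from graph partitioning)
example = randomized_saturation({"c1": ["u1", "u2", "u3", "u4"],
                                 "c2": ["u5", "u6"]})
```

In practice the clusters would come from community detection on the sharing graph or from geography; the quality of the partition bounds how much interference the design removes.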
(b) Metrics
- Define a primary metric, a north-star metric, and guardrails (e.g., retention, complaints), including measurement windows and units of analysis.
(c) Variance reduction
- Describe how you would use CUPED or pre-exposure covariates, and at what level (user/cluster), without inducing bias.
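For (c), the core of CUPED is small enough to show inline: estimate theta from the covariance of the metric with a strictly pre-exposure covariate, then residualize. A minimal sketch (variable names are illustrative):

```python
from statistics import mean

def cuped_adjust(y, x):
    """CUPED: residualize metric y on a pre-exposure covariate x.
    theta is estimated on the pooled sample (treatment + control),
    which avoids bias as long as x is measured strictly before
    exposure and is therefore independent of assignment."""
    x_bar, y_bar = mean(x), mean(y)
    cov = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
    var = sum((xi - x_bar) ** 2 for xi in x)
    theta = cov / var
    # Adjusted metric has the same mean as y but lower variance
    # whenever x is correlated with y.
    return [yi - theta * (xi - x_bar) for xi, yi in zip(x, y)]
```

Under cluster randomization the same adjustment applies at the cluster level, using cluster-mean pre-period outcomes as the covariate.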
(d) Power and MDE for a cluster RCT
- With ICC = 0.05 and 30 clusters total, state your assumptions and show the MDE calculation for a binary primary outcome. Show formulas and at least one numeric example.
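For (d), the standard route is to inflate the simple-randomization variance by the design effect DEFF = 1 + (m - 1) * ICC, where m is users per cluster. A worked numeric sketch, assuming 15 clusters per arm, 1,000 users per cluster, and a 10% baseline rate (all assumed values, normal approximation, equal cluster sizes):

```python
import math

def mde_cluster_binary(p, icc, clusters_per_arm, users_per_cluster,
                       z_alpha=1.96, z_beta=0.84):
    """Absolute MDE for a two-arm cluster RCT with a binary outcome.

    MDE = (z_{1-a/2} + z_{1-b}) * sqrt(2 * p * (1 - p) * DEFF / n_arm)
    with DEFF = 1 + (m - 1) * ICC. Defaults correspond to two-sided
    alpha = 0.05 and power = 0.80."""
    deff = 1 + (users_per_cluster - 1) * icc
    n_per_arm = clusters_per_arm * users_per_cluster
    se = math.sqrt(2 * p * (1 - p) * deff / n_per_arm)
    return (z_alpha + z_beta) * se

# 30 clusters total -> 15 per arm; hypothetical 1,000 users/cluster:
# DEFF = 1 + 999 * 0.05 = 50.95, so the MDE is roughly 6.9pp.
print(round(mde_cluster_binary(0.10, 0.05, 15, 1000), 3))  # ~0.069
```

The takeaway an answer should surface: with only 30 clusters, effective sample size is driven by cluster count, not user count, so the MDE is large unless clusters are added or variance is reduced (e.g., via CUPED).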
(e) Sequential monitoring
- Outline an alpha-spending approach, risks of peeking, and how many looks you would plan.
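For (e), a common concrete choice is a Lan-DeMets O'Brien-Fleming-type spending function, which spends almost no alpha at early looks and so guards against peeking. A sketch of the cumulative spending schedule for four equally spaced looks (the look schedule itself is an assumption):

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def obf_alpha_spent(t, z_half=1.96):
    """Lan-DeMets O'Brien-Fleming-type spending function:
    cumulative two-sided alpha spent at information fraction t,
    alpha*(t) = 2 * (1 - Phi(z_{1-alpha/2} / sqrt(t)))."""
    return 2 * (1 - phi(z_half / math.sqrt(t)))

# Four planned looks at 25/50/75/100% of the information:
for t in (0.25, 0.5, 0.75, 1.0):
    print(f"t={t:.2f}  cumulative alpha spent = {obf_alpha_spent(t):.4f}")
```

At t = 1 the function spends the full 0.05; early looks get only a sliver, which is exactly the property that makes unplanned peeking costly and planned interim looks cheap.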
(f) Rollout decision
- Propose a decision framework balancing short-term revenue lift against retention risk and potential long-term effects (network health, creator reactions), including thresholds or expected-value logic.
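The expected-value logic in (f) can be made concrete with a toy rule that prices retention changes in dollars and gates on guardrails. Every number below is a placeholder, not a recommendation:

```python
def launch_decision(revenue_lift, retention_delta_pp, guardrails_ok,
                    retention_value_per_pp=2.0e6, min_ev=0.0):
    """Toy expected-value rule: trade annualized revenue lift ($)
    against retention change (percentage points), priced at an
    assumed dollar value per point. Guardrail breaches veto launch
    regardless of expected value."""
    if not guardrails_ok:
        return "hold: guardrail breach"
    ev = revenue_lift + retention_delta_pp * retention_value_per_pp
    if ev > min_ev:
        return "launch"
    return "hold: negative expected value"

# A $5M lift against a 0.5pp retention loss priced at $2M/pp:
print(launch_decision(5.0e6, -0.5, True))
```

A fuller answer would add uncertainty (decide on a confidence bound of EV, not the point estimate) and an explicit discount for hard-to-measure long-run effects such as creator attrition.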