This question evaluates a data scientist's competency in causal inference and product experimentation design, specifically handling interference/spillovers, defining success metrics and guardrails, applying variance reduction techniques, computing power and MDE for cluster RCTs, planning sequential monitoring, and framing rollout decisions.

You are designing an experiment to evaluate a new paywall on a social/content app where users frequently share links with each other and externally. Because of sharing and algorithmic amplification, user outcomes may be affected by other users' treatment assignments (interference/spillovers), violating standard A/B test assumptions.
Define success metrics and choose an experimentation strategy that handles interference. Address all parts concisely and concretely.
(a) Interference and mitigation
(b) Metrics
(c) Variance reduction
(d) Power and MDE for a cluster RCT
(e) Sequential monitoring
(f) Rollout decision
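For part (c), a common variance-reduction technique is CUPED: residualize the experiment metric on a pre-experiment covariate that is correlated with the metric but cannot be affected by treatment (e.g., the same metric measured before the experiment). A minimal sketch, assuming a single numeric covariate per user; function and variable names are illustrative:

```python
import numpy as np

def cuped_adjust(y, x):
    """CUPED adjustment: subtract the part of y explained by a
    pre-experiment covariate x.

    theta = cov(x, y) / var(x); the adjusted metric keeps the same
    mean as y but has lower variance when x and y are correlated.
    Requires that x is measured before randomization, so it is
    independent of treatment assignment.
    """
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    theta = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    return y - theta * (x - x.mean())
```

In a cluster design the same adjustment applies at the cluster level (cluster-mean outcome on cluster-mean pre-period covariate).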
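For part (d), a standard approach is to inflate the required sample size by the design effect, DEFF = 1 + (m - 1) * ICC for equal-sized clusters of m users, and invert the two-sample power formula to get the MDE. A sketch under the assumptions of equal cluster sizes, a known outcome SD, and a normal approximation; all inputs below are illustrative:

```python
import math
from statistics import NormalDist

def cluster_mde(n_clusters, cluster_size, icc, sd, alpha=0.05, power=0.8):
    """Absolute minimum detectable effect for a two-arm cluster RCT.

    n_clusters   -- clusters per arm
    cluster_size -- users per cluster (assumed equal)
    icc          -- intraclass correlation of the outcome
    sd           -- outcome standard deviation
    """
    z = NormalDist()
    deff = 1 + (cluster_size - 1) * icc        # design effect
    n_eff = n_clusters * cluster_size / deff   # effective n per arm
    z_alpha = z.inv_cdf(1 - alpha / 2)         # two-sided test
    z_power = z.inv_cdf(power)
    return (z_alpha + z_power) * sd * math.sqrt(2 / n_eff)

# Example: 200 clusters/arm, 50 users each, ICC = 0.02, sd = 1.0
mde = cluster_mde(200, 50, 0.02, 1.0)
```

Note how even a small ICC matters: with m = 50 and ICC = 0.02, DEFF ≈ 1.98, roughly halving the effective sample size relative to user-level randomization.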