Scenario
A social-media homepage team runs experiments and product-metric analyses on a personalized feed. An intern accidentally launched a treatment to a user cohort without a randomized control group. You have standard event logs (user_id, timestamps), impression/click events, pre-period behavior history, device/geo/app-version attributes, and eligibility flags.
Tasks
- No-control experiment: How can you still estimate treatment impact? Compare matching versus propensity-score weighting and discuss the trade-offs.
- A/B test hygiene: Review an existing randomized A/B test and list common pitfalls that could bias the results.
- New module metrics: For a new horizontal home-feed module, what primary metrics would you track to judge success?
- Diagnostic scenario: Post-launch, you observe homepage click-through rate (CTR) dropping in treatment while DAU and time spent stay flat. How would you investigate root causes, and which user segments would you analyze first?
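The propensity-score weighting approach from the first task can be sketched on simulated data. Everything below is synthetic and illustrative: the covariates, coefficients, and the +1.0 "true lift" are made-up stand-ins for the pre-period behavior history in the logs, and the logistic model is hand-rolled only to avoid external dependencies.

```python
import numpy as np

# Synthetic sketch of inverse-propensity weighting (IPW) for the ATT when
# exposure was not randomized. All data here is simulated; in practice X
# would come from pre-period behavior, device/geo, tenure, etc.
rng = np.random.default_rng(0)
n = 20_000

pre_clicks = rng.poisson(5, n).astype(float)           # pre-period activity
tenure = rng.exponential(200, n) / 100.0               # account age (hundreds of days)
X = np.column_stack([np.ones(n), pre_clicks, tenure])  # design matrix with intercept

# Non-random exposure: heavier pre-period users were more likely to be treated
logit = -1.0 + 0.2 * pre_clicks - 0.1 * tenure
treated = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Post-period outcome with a true treatment lift of +1.0 clicks
y = 2.0 + 0.5 * pre_clicks + 1.0 * treated + rng.normal(0.0, 1.0, n)

def fit_logistic(X, t, iters=50, ridge=1e-8):
    """Logistic regression by Newton's method (no external dependencies)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
        grad = X.T @ (t - p)
        hess = -(X * (p * (1.0 - p))[:, None]).T @ X - ridge * np.eye(X.shape[1])
        w -= np.linalg.solve(hess, grad)
    return w

ps = 1.0 / (1.0 + np.exp(-X @ fit_logistic(X, treated.astype(float))))

# ATT weights: treated users get weight 1; untreated users get the odds
# ps/(1-ps), which reweights them to resemble the treated cohort.
ctrl = ~treated
att = y[treated].mean() - np.average(y[ctrl], weights=ps[ctrl] / (1.0 - ps[ctrl]))
naive = y[treated].mean() - y[ctrl].mean()
print(f"naive difference: {naive:.2f}  |  IPW ATT estimate: {att:.2f}")
```

On this simulation the naive treated-vs-untreated difference is inflated by selection on pre-period activity, while the IPW estimate should land near the true +1.0 lift; the key assumptions are no unmeasured confounding and adequate overlap in propensity scores. Matching trades some of IPW's variance (from extreme weights) for bias from imperfect matches.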
Hints
- Causal inference when randomization is absent (matching, propensity scores, DiD/ITS/synthetic control).
- Experiment design pitfalls and guardrails.
- Metrics hierarchy: module-level, session-level, ecosystem, and health.
- Segmentation: device, app version, geography, tenure, power vs. casual users, exposure/reach to the module.
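One of the hinted techniques, difference-in-differences (DiD), reduces to a simple calculation once cohort-level pre/post means are available. The CTR numbers below are made-up placeholders, not values from the scenario, and the estimate is only valid under the parallel-trends assumption (both cohorts would have moved in parallel absent the launch):

```python
# DiD on aggregated cohort means (illustrative numbers, not from the scenario).
pre_exposed, post_exposed = 0.118, 0.131   # exposed cohort CTR before/after launch
pre_compare, post_compare = 0.120, 0.124   # unexposed comparison cohort

# Subtracting the comparison cohort's change removes shared time trends
# (seasonality, app-wide releases) from the exposed cohort's change.
did = (post_exposed - pre_exposed) - (post_compare - pre_compare)
print(f"DiD estimate of treatment effect on CTR: {did:+.4f}")
```

Plotting several pre-period intervals for both cohorts is the standard check that the parallel-trends assumption is plausible before trusting the estimate.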