Experiment Diagnosis: Horizontal Recommendations Carousel on Home
Context
A new horizontal recommendations carousel was launched on the home page. In the post-launch A/B experiment, the Treatment group shows a decrease in the home-page primary interaction rate (CTR), while app-wide DAU and total time spent remain unchanged.
Assumptions for clarity:
- Exposure = the module is rendered on the home page (request/response logged), regardless of viewport visibility.
- Impression = a tile in the carousel becomes viewable (e.g., ≥50% visible for ≥1s) per IAB-style viewability rules.
- Qualified click (qClick) = a click leading to a successful content load (or dwell ≥1–3s), to filter out mis-taps.
- Eligibility = a home-page view where the carousel was technically eligible to render (e.g., device supports it, user not rate-limited, no blocking experiment).
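Under these definitions, the funnel metrics form a simple ratio chain. A minimal sketch, assuming a hypothetical flat event log with an `event_type` field (the schema and field names are illustrative, not the actual logging pipeline):

```python
from collections import Counter

def funnel_rates(events):
    """Compute carousel funnel rates from a flat event log.

    `events` is a list of dicts with an 'event_type' field taking values
    'eligible', 'exposure', 'impression', or 'qclick' (hypothetical schema).
    Returns the qualified CTR plus its exposure- and eligibility-normalized
    variants.
    """
    counts = Counter(e["event_type"] for e in events)
    impressions = counts["impression"]
    exposures = counts["exposure"]
    eligible = counts["eligible"]
    qclicks = counts["qclick"]
    return {
        # Primary metric: qualified clicks per viewable tile.
        "qctr_per_impression": qclicks / impressions if impressions else 0.0,
        # Normalized variants catch dilution when the module renders but is
        # never scrolled into view, or when it fails to render at all.
        "qctr_per_exposure": qclicks / exposures if exposures else 0.0,
        "qctr_per_eligible_view": qclicks / eligible if eligible else 0.0,
    }
```

Comparing the three rates between arms shows where the funnel leaks: a drop in per-impression CTR but not per-exposure CTR points at tile quality, while the reverse points at viewability or placement.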
Tasks
- Define success metrics and guardrails for this surface. Primary: qualified CTR and saves per impression. Guardrails: session length, bounce rate, latency, crashes, notifications sent. Include exposure- and eligibility-normalized variants.
- Outline a stepwise diagnosis plan covering: instrumentation validation (event drops, duplicate fires), exposure parity, novelty/position bias, cannibalization of other entry points, surfacing frequency, content quality, scroll-depth/viewport effects, personalization cold-start, ranking changes, and infra incidents. Specify the exact logs/queries you would run.
- Propose at least 8 segmentation cuts to localize the effect (e.g., new vs. returning, geo, device, app version, network quality, time of day, content-domain affinity, session depth, notification-referred vs. organic, paid vs. organic users).
- Recommend next steps: targeted fixes or follow-up experiments (e.g., cap frequency, change the default slot, diversify content, boost cold-start users), and explain how to decide between rollback and iteration using pre-registered thresholds.
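A first concrete step for the exposure-parity item above is a sample-ratio-mismatch (SRM) check: a chi-square goodness-of-fit test on assignment counts. A sketch, assuming a 50/50 intended split and the df=1, α=0.05 critical value of 3.841 (both are assumptions of this sketch, not properties of the experiment):

```python
def srm_check(n_control, n_treatment, expected_ratio=0.5, critical=3.841):
    """Chi-square goodness-of-fit test for sample-ratio mismatch.

    Compares observed assignment counts against the expected split
    (default 50/50). Returns the chi-square statistic and whether it
    exceeds the df=1, alpha=0.05 critical value -- a mismatch signals
    broken exposure logging or bucketing, not a real treatment effect.
    """
    total = n_control + n_treatment
    exp_c = total * expected_ratio
    exp_t = total * (1 - expected_ratio)
    chi2 = ((n_control - exp_c) ** 2 / exp_c
            + (n_treatment - exp_t) ** 2 / exp_t)
    return chi2, chi2 > critical
```

If the SRM check fires, the CTR delta should not be interpreted at all until the exposure logging is fixed.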
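The segmentation task can likewise be sketched as a per-segment comparison of the primary metric between arms. The record layout below (`segment`, `arm`, `impressions`, `qclicks`) is a hypothetical pre-aggregated rollup, not the real warehouse schema:

```python
def segment_deltas(rows):
    """Per-segment qualified-CTR delta (treatment minus control).

    `rows` are pre-aggregated dicts with keys "segment", "arm",
    "impressions", "qclicks" (hypothetical rollup). A cut whose delta is
    far more negative than the overall delta localizes the regression.
    """
    agg = {}  # (segment, arm) -> [impressions, qclicks]
    for r in rows:
        key = (r["segment"], r["arm"])
        bucket = agg.setdefault(key, [0, 0])
        bucket[0] += r["impressions"]
        bucket[1] += r["qclicks"]
    deltas = {}
    for segment in {s for s, _ in agg}:
        ctr = {}
        for arm in ("control", "treatment"):
            imps, clicks = agg.get((segment, arm), (0, 0))
            ctr[arm] = clicks / imps if imps else 0.0
        deltas[segment] = ctr["treatment"] - ctr["control"]
    return deltas
```

Running this over each of the proposed cuts (new vs. returning, device, geo, and so on) turns "CTR is down" into "CTR is down for this specific population", which is what makes the follow-up fix targeted rather than global.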
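Finally, the rollback-vs-iterate decision can be pre-registered as a simple rule over the observed metric deltas. The threshold values below are placeholders that illustrate the shape of such a rule; real thresholds must be chosen and registered before launch:

```python
def launch_decision(primary_delta_rel, guardrail_deltas_rel,
                    rollback_primary=-0.05, rollback_guardrail=-0.02):
    """Pre-registered rollback/iterate/ship rule (illustrative thresholds).

    primary_delta_rel: relative change in qualified CTR (e.g. -0.07 = -7%).
    guardrail_deltas_rel: dict of relative changes for each guardrail
    (session length, bounce rate, latency, crashes, ...).
    Breaching any rollback threshold -> "rollback"; a primary drop short
    of the threshold -> "iterate"; otherwise -> "ship".
    """
    if primary_delta_rel <= rollback_primary:
        return "rollback"
    if any(d <= rollback_guardrail for d in guardrail_deltas_rel.values()):
        return "rollback"
    if primary_delta_rel < 0:
        return "iterate"
    return "ship"
```

Writing the rule down before reading the results is the point: it prevents post-hoc rationalization of a regression as "probably novelty" once the numbers are in.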