Diagnose CTR drop after recommendation launch
Company: Pinterest
Role: Data Scientist
Category: Analytics & Experimentation
Difficulty: hard
Interview Round: Onsite
A new horizontal recommendations carousel was shipped on the home page. In the post-launch experiment, the Treatment group shows a decrease in home-page CTR (primary interaction rate) while app-wide DAU and total time spent remain unchanged.
Tasks:
1) Define success metrics and guardrails for this surface (primary: qualified CTR or saves per impression; guardrails: session length, bounce rate, latency, crashes, notifications sent). Include exposure- and eligibility-normalized variants (see Sketch 1 below).
2) Outline a stepwise diagnosis plan: instrumentation validation (event drops, duplicate fires), exposure parity (sample-ratio mismatch), novelty/position bias, cannibalization of other entry points, surfacing frequency, content quality, scroll-depth/viewport effects, personalization cold-start, ranking changes, and infra incidents. Specify the exact logs/queries you would run (see Sketch 2 below).
3) Propose at least 8 segmentation cuts to localize the effect (e.g., new vs. returning users, geo, device, app version, network quality, time of day, content-domain affinity, session depth, notification-referred vs. organic sessions, paid vs. organic users); see Sketch 3 below.
4) Recommend next steps: targeted fixes or follow-up experiments (e.g., cap surfacing frequency, change the default slot, diversify content, boost cold-start users), and explain how to make the rollback-vs-iterate decision against pre-registered thresholds (see Sketch 4 below).
Quick Answer: Treat the CTR drop as a diagnosis problem, not a verdict. Validate instrumentation and exposure parity first, then test specific hypotheses (novelty/position bias, cannibalization, cold-start gaps) with segmentation, and make the rollback-vs-iterate call against pre-registered guardrail thresholds. The question evaluates a data scientist's competency in experiment diagnosis, metric and guardrail definition, instrumentation validation, segmentation analysis, and causal reasoning about recommender-system impacts.
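
Sketch 1 (Task 1): a minimal Python sketch of the metric variants, to make "exposure- and eligibility-normalized" concrete. The flat event-log schema (user_id, dwell_ms, clicked, saved, eligible) and the 500 ms qualification dwell are illustrative assumptions, not Pinterest's actual logging.

    import pandas as pd

    def surface_metrics(events: pd.DataFrame) -> pd.Series:
        # One row per carousel impression (assumed schema; see lead-in).
        # "Qualified" impression: on screen long enough to plausibly be seen.
        qualified = events[events["dwell_ms"] >= 500]
        exposed_users = events["user_id"].nunique()
        eligible_users = events.loc[events["eligible"], "user_id"].nunique()
        return pd.Series({
            "raw_ctr": events["clicked"].mean(),
            "qualified_ctr": qualified["clicked"].mean(),
            "saves_per_impression": events["saved"].mean(),
            # Exposure-normalized: denominator is users who saw the carousel.
            "clicks_per_exposed_user": events["clicked"].sum() / exposed_users,
            # Eligibility-normalized: denominator is users eligible for the
            # surface, robust to treatment changing who gets exposed.
            "clicks_per_eligible_user": events["clicked"].sum() / eligible_users,
        })

The eligibility-normalized variant matters here because a new carousel can change who gets exposed, which silently shifts the CTR denominator between arms.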
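Sketch 2 (Task 2): the two cheapest first-day checks, written as a sketch. The 50/50 split and the (user_id, impression_id) dedup key are assumptions; real values come from the experiment config and logging spec.

    import pandas as pd
    from scipy import stats

    def srm_check(n_control: int, n_treatment: int,
                  treatment_ratio: float = 0.5, alpha: float = 0.001) -> bool:
        # Sample-ratio-mismatch test: a tiny p-value means randomization or
        # exposure logging is broken; stop and fix before reading any metric.
        total = n_control + n_treatment
        expected = [total * (1 - treatment_ratio), total * treatment_ratio]
        _, p_value = stats.chisquare([n_control, n_treatment], f_exp=expected)
        return p_value < alpha

    def duplicate_fire_rate(events: pd.DataFrame) -> float:
        # Share of impression events logged more than once; duplicate fires in
        # one arm inflate its impression denominator and mechanically depress
        # that arm's CTR even when real behavior is unchanged.
        return events.duplicated(subset=["user_id", "impression_id"]).mean()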
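Sketch 3 (Task 3): per-segment treatment-minus-control deltas with a quick two-proportion z-test, to localize the drop. The segment column names and the "arm"/"clicked" fields are illustrative; with this many cuts, apply a multiple-comparison correction before acting on any single segment.

    import pandas as pd
    from statsmodels.stats.proportion import proportions_ztest

    SEGMENTS = ["new_vs_returning", "geo", "device", "app_version",
                "network_quality", "hour_of_day_bucket",
                "session_depth_bucket", "referrer"]  # hypothetical columns

    def segment_deltas(df: pd.DataFrame, segment: str) -> pd.DataFrame:
        rows = []
        for value, g in df.groupby(segment):
            trt = g[g["arm"] == "treatment"]["clicked"]
            ctl = g[g["arm"] == "control"]["clicked"]
            if len(trt) == 0 or len(ctl) == 0:
                continue  # segment missing from one arm: check eligibility logs
            _, p = proportions_ztest([trt.sum(), ctl.sum()],
                                     [len(trt), len(ctl)])
            rows.append({"segment_value": value,
                         "delta_ctr": trt.mean() - ctl.mean(),
                         "p_value": p})
        # Most negative deltas first: the cuts where the drop concentrates.
        return pd.DataFrame(rows).sort_values("delta_ctr")

A drop concentrated in new users points toward cold-start; one concentrated in deep sessions points toward position or cannibalization effects.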
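Sketch 4 (Task 4): one way to encode a pre-registered rollback-vs-iterate rule as a confidence-interval comparison against a harm threshold. The -2pp default is a placeholder; the real threshold must be written down before launch.

    def rollback_or_iterate(delta: float, se: float,
                            harm_threshold: float = -0.02,
                            z: float = 1.96) -> str:
        # delta = treatment - control on the primary metric (absolute);
        # harm_threshold = worst tolerated drop, fixed before launch.
        ci_low, ci_high = delta - z * se, delta + z * se
        if ci_high < harm_threshold:
            return "rollback"   # confidently worse than the tolerated harm
        if ci_low > harm_threshold:
            return "keep"       # harm bound cleared; iterate on the upside
        return "iterate"        # inconclusive: extend, fix, or re-run

Pre-registering the threshold keeps the launch decision from being re-litigated after the data arrive, which is the point of the guardrail.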