
Diagnose CTR drop after recommendation launch

Last updated: Mar 29, 2026

Quick Overview

This question evaluates a data scientist's competency in experiment diagnosis, metric and guardrail definition, instrumentation validation, segmentation analysis, and causal reasoning for recommender-system impacts.


Company: Pinterest

Role: Data Scientist

Category: Analytics & Experimentation

Difficulty: hard

Interview Round: Onsite



Posted: Oct 13, 2025, 9:49 PM

Experiment Diagnosis: Horizontal Recommendations Carousel on Home

Context

A new horizontal recommendations carousel was launched on the home page. In the post-launch A/B experiment, the Treatment group shows a decrease in the home-page primary interaction rate (CTR), while app-wide DAU and total time spent remain unchanged.
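Before diagnosing mechanisms, it is worth confirming that the observed CTR delta is not noise. A minimal stdlib-only sketch of a pooled two-proportion z-test; the function name and the click/denominator counts below are illustrative, not taken from the experiment:

```python
import math

def two_proportion_ztest(clicks_c: int, n_c: int, clicks_t: int, n_t: int):
    """Two-sided pooled z-test for a difference in proportions (CTR)."""
    p_c, p_t = clicks_c / n_c, clicks_t / n_t
    p_pool = (clicks_c + clicks_t) / (n_c + n_t)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_c + 1 / n_t))
    z = (p_t - p_c) / se
    # Two-sided tail probability via the complementary error function
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return p_t - p_c, z, p_value

# Hypothetical daily aggregates: control 5.0% CTR vs treatment 4.6% CTR
delta, z, p = two_proportion_ztest(50_000, 1_000_000, 46_000, 1_000_000)
```

With counts at home-page scale, even a 0.4pp absolute drop is far outside sampling noise, so the diagnosis steps below are warranted rather than a wait-and-see.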

Assumptions for clarity:

  • Exposure = the module is rendered on the home page (request/response logged), regardless of viewport visibility.
  • Impression = a tile in the carousel becomes viewable (e.g., ≥50% visible for ≥1s) per IAB-style visibility rules.
  • Qualified click (qClick) = a click leading to a successful content load (or dwell ≥1–3s) to filter mis-taps.
  • Eligibility = a home-page view where the carousel was technically eligible to render (e.g., device supports it, user not rate-limited, no blocking experiment).
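The definitions above yield three denominators for the same qualified-click numerator, and comparing them localizes the problem: if only the exposure- or eligibility-normalized variants fall, the issue is viewability or render failures rather than content quality. A sketch with hypothetical counts (the dataclass and field names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class CarouselDay:
    eligible_views: int  # home views where the carousel was eligible to render
    exposures: int       # module actually rendered (request/response logged)
    impressions: int     # tiles viewable per the IAB-style rule
    qclicks: int         # qualified clicks (content load or sufficient dwell)
    saves: int

def normalized_rates(d: CarouselDay) -> dict:
    """Same numerator, three denominators: each isolates a failure mode."""
    return {
        "qctr_per_impression": d.qclicks / d.impressions,        # content/ranking
        "qctr_per_exposure": d.qclicks / d.exposures,            # + viewability, scroll depth
        "qctr_per_eligible_view": d.qclicks / d.eligible_views,  # + render success
        "saves_per_impression": d.saves / d.impressions,
    }

day = CarouselDay(eligible_views=1_000_000, exposures=950_000,
                  impressions=600_000, qclicks=24_000, saves=9_000)
rates = normalized_rates(day)
```

Because eligible_views ≥ exposures ≥ impressions, the three CTR variants are ordered; a gap that widens between two adjacent variants points at the stage in between.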

Tasks

  1. Define success metrics and guardrails for this surface. Primary: qualified CTR and saves per impression. Guardrails: session length, bounce rate, latency, crashes, notifications sent. Include exposure- and eligibility-normalized variants.
  2. Outline a stepwise diagnosis plan covering: instrumentation validation (event drops, duplicate fires), exposure parity, novelty/position bias, cannibalization of other entry points, surfacing frequency, content quality, scroll-depth/viewport effects, personalization cold-start, ranking changes, and infra incidents. Specify exact logs/queries you would run.
  3. Propose at least 8 segmentation cuts to localize the effect (e.g., new vs returning, geo, device, app version, network quality, time-of-day, content domain affinity, session depth, notification-referred vs organic, paid vs organic users).
  4. Recommend next steps: targeted fixes or follow-up experiments (e.g., cap frequency, change default slot, diversify content, boost for cold-start), and how to judge rollback vs iterate decisions with pre-registered thresholds.
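The instrumentation-validation and exposure-parity steps in the diagnosis plan typically begin with a sample-ratio-mismatch (SRM) check: if the observed control/treatment split deviates from the designed split, event drops or exposure-logging bugs are likely and downstream metric comparisons are untrustworthy. A stdlib-only sketch, with hypothetical counts and an assumed 50/50 design:

```python
import math

def srm_check(n_control: int, n_treatment: int,
              expected_treatment_share: float = 0.5):
    """Sample-ratio-mismatch test: is the observed assignment split
    consistent with the design? A tiny p-value flags logging drops
    or exposure bugs before any metric is read."""
    n = n_control + n_treatment
    observed = n_treatment / n
    se = math.sqrt(expected_treatment_share * (1 - expected_treatment_share) / n)
    z = (observed - expected_treatment_share) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    return observed, p_value

# Hypothetical: 1,000,000 control vs 985,000 treatment under a 50/50 split
share, p = srm_check(1_000_000, 985_000)
```

A 0.4% shortfall in treatment units looks negligible but is wildly significant at this scale; a failed SRM check should halt metric interpretation until the logging pipeline is validated.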

