
Design and assess video-pin increase experiment

Last updated: Mar 29, 2026

Quick Overview

This question evaluates expertise in experimentation design and causal inference for a Data Scientist role in the Analytics & Experimentation domain, requiring a mix of practical application and conceptual reasoning. It covers unit of randomization, interference and spillovers, exposure capping and ramping, precise primary and guardrail metric definitions, power/MDE planning, A/A checks, and quasi-experimental alternatives. It is commonly asked to assess the ability to interpret treatment/control readouts, balance engagement lifts against negative guardrails and multiple-testing concerns, and identify the follow-up analyses and diagnostics needed for robust product decisions.



Company: Pinterest

Role: Data Scientist

Category: Analytics & Experimentation

Difficulty: Medium

Interview Round: Technical Screen



Related Interview Questions

  • How would you evaluate a carousel launch? - Pinterest (medium)
  • How to evaluate a new Carousel feature - Pinterest (easy)
  • Evaluate Fresh Content and Video Experiments - Pinterest (medium)
  • Design and Evaluate a Home Carousel - Pinterest (medium)
  • Evaluate Carousel and Billboard Lift - Pinterest (medium)
Posted: Oct 13, 2025

You plan to increase the proportion of video pins surfaced in the home feed. Design a rigorous evaluation and then interpret the provided results.

A) Experiment design

  1. Specify the unit of randomization (user-level vs. session-level) and justify considering network/content-supply interference and feed-ranking spillovers. State how to cap per-user exposure to the new mix (e.g., from 30% baseline video share to 45% target) and how to ramp.
  2. Define primary success metrics with exact formulas (e.g., saves_per_user_day, clicks_per_impression, time_spent_per_user_day) and guardrails (e.g., complaint_rate = complaints/impressions, session_end_rate, creator churn, bandwidth cost per user). State win/loss directions.
  3. Outline power/MDE and duration assumptions (alpha, two-sided test, allocation, variance source), and how you will handle sequential looks or peeking (e.g., group sequential or CUPED). Include an A/A check and novelty/fatigue plan (minimum run and long-term holdout).
  4. If an RCT is infeasible, propose a credible quasi-experiment (e.g., staggered rollout DiD with user fixed effects + inverse-propensity weighting, or synthetic control). List identifying assumptions, diagnostics, and sensitivity checks.
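The power/MDE arithmetic in point 3 can be sketched with a standard two-proportion normal approximation. The inputs below (3% baseline CTR, a 2% relative MDE, alpha = 0.05 two-sided, 80% power, 50/50 allocation) are illustrative assumptions, not values given in the question, and the sketch treats CTR as a per-user binomial, which ignores impression clustering — with clustered/robust SEs the required sample is larger:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p0, mde_rel, alpha=0.05, power=0.80):
    """Users per arm for a two-sided two-proportion z-test (normal approx.)."""
    p1 = p0 * (1 + mde_rel)            # treatment rate at the target MDE
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p0 + p1) / 2              # pooled rate under H0
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2
    return ceil(num / (p1 - p0) ** 2)

# Detecting a 2% relative lift on a 3% baseline CTR:
print(n_per_arm(0.03, 0.02))   # ≈ 1.28M users per arm
```

At roughly 1.28M users per arm, a 2% relative CTR MDE nearly exhausts the stated N ≈ 2.0M, which is why the MDE, allocation, and duration assumptions need to be pinned down (and variance reduction such as CUPED considered) before launch.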

B) Interpret this 14-day readout (N ≈ 2.0M users; user-level randomization; robust SEs):

  metric          | control_mean | treatment_mean | lift_% | p_value
  CTR             | 3.00%        | 3.60%          | +20.0  | 0.010
  avg_session_sec | 310          | 340            | +9.7   | 0.040
  7d_retention    | 28.0%        | 27.0%          | -3.6   | 0.070
  complaint_rate  | 0.50%        | 0.65%          | +30.0  | 0.030
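Since four hypotheses were tested, the readout invites a multiplicity check. One common choice is a Benjamini-Hochberg correction across the four reported p-values (treat this as a sketch — many teams instead test guardrails one-sided and uncorrected precisely so that harms are not explained away by correction):

```python
# p-values from the 14-day readout
readout = {"CTR": 0.010, "avg_session_sec": 0.040,
           "7d_retention": 0.070, "complaint_rate": 0.030}

def bh_reject(pvals, q=0.05):
    """Benjamini-Hochberg: metrics whose p-values survive FDR control at level q."""
    ranked = sorted(pvals.items(), key=lambda kv: kv[1])
    m = len(ranked)
    cutoff = 0
    for i, (_, p) in enumerate(ranked, start=1):
        if p <= q * i / m:          # largest rank i with p_(i) <= q*i/m
            cutoff = i
    return {name for name, _ in ranked[:cutoff]}

print(bh_reject(readout))   # {'CTR'}
```

Under BH at q = 0.05 only CTR survives, so the session-time win is less certain than it looks — but note the complaint-rate guardrail should arguably not be corrected away, since the cost of missing a real harm is asymmetric.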

  • Provide a clear recommendation to the PM: roll out, iterate, or stop? Justify using the metrics above, multiple-testing/guardrail considerations, and potential mitigations (e.g., cap video share for sensitive cohorts, rank-quality filters). Also state what additional data or follow-up analysis you would run before a full rollout.
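Among the follow-up analyses, the CUPED adjustment mentioned in the design section can tighten the borderline retention and complaint readouts by regressing out a pre-experiment covariate. A minimal sketch on simulated data (the covariate, coefficients, and noise levels are illustrative assumptions):

```python
import random
random.seed(0)

# Simulated per-user outcome with a correlated pre-experiment covariate
n = 10_000
pre = [random.gauss(300, 50) for _ in range(n)]        # pre-period session seconds
post = [0.8 * x + random.gauss(60, 30) for x in pre]   # in-experiment outcome

mean_pre = sum(pre) / n
mean_post = sum(post) / n
cov = sum((x - mean_pre) * (y - mean_post) for x, y in zip(pre, post)) / n
var_pre = sum((x - mean_pre) ** 2 for x in pre) / n
theta = cov / var_pre                                  # CUPED coefficient

# Adjusted outcome: same mean, lower variance
adjusted = [y - theta * (x - mean_pre) for x, y in zip(pre, post)]
var_post = sum((y - mean_post) ** 2 for y in post) / n
mean_adj = sum(adjusted) / n
var_adj = sum((y - mean_adj) ** 2 for y in adjusted) / n
print(round(var_adj / var_post, 2))   # variance ratio well below 1
```

The variance ratio translates directly into narrower confidence intervals (or a shorter required run) for the same allocation, which is exactly what a marginal -3.6% retention read needs before a rollout decision.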
