PracHub

Design metrics and experiment for Shopping launch

Last updated: Mar 29, 2026

Quick Overview

This question evaluates a data scientist’s competency in experimental design and product analytics, including metric selection and precise definitions, guardrail and funnel diagnostics, spillover and novelty handling, power/MDE estimation, and multi-objective decision framing.

Design metrics and experiment for Shopping launch

Company: Pinterest

Role: Data Scientist

Category: Analytics & Experimentation

Difficulty: hard

Interview Round: Technical Screen

You are launching a new Shopping module embedded in the Pins feed. Design an experiment and metric plan that:

  1. Metrics. Choose one primary success metric and 3–5 guardrail metrics. Define each precisely (numerator/denominator, unit of analysis, aggregation window) and include at least one intermediate/funnel metric (e.g., num_clicks_of_new_feature, stay_time_in_shopping). Discuss the pros and cons of using DAU and user time spent as primaries versus alternatives (e.g., Shopping CTR, Add-to-Cart Rate, GMV/DAU), and specify acceptable directions and magnitudes for guardrails.
  2. Spillover and learning effects. Handle spillover/interference (repins/shares may expose control users) and learning/novelty effects. Propose and justify one concrete design: user-level randomization with exposure logging and adjacency tests; cluster/geo randomization; switchback (time-based); or a two-stage saturation design. Detail the randomization unit, eligibility/exposure rules, cooldown, novelty burn-in, and how you would detect and quantify spillover (e.g., graph distance, household/geo adjacency) and learning (e.g., time-on-feature slope).
  3. Power and MDE. State assumptions on baseline rates, variance, intra-cluster correlation if clustered, and horizon length, and explain how you will handle seasonality and peaky traffic (weekly cycles). Include an A/A test and a CUPED (or covariate-adjustment) plan.
  4. Decision framework. Suppose after a 21-day test you observe a +2.3% lift in Shopping CTR and +1.1% in GMV/user, but −0.6% in overall time spent and −0.2% in DAU. Describe your net weighted lift (or multi-objective) rubric, guardrail thresholds, sensitivity to long-term effects, and what you would recommend to the PM. Include the additional diagnostics you would run before a rollout (e.g., user/creator segment heterogeneity, cannibalization of ad revenue, repeat usage vs. one-off novelty).

Related Interview Questions

  • How would you evaluate a carousel launch? - Pinterest (medium)
  • How to evaluate a new Carousel feature - Pinterest (easy)
  • Evaluate Fresh Content and Video Experiments - Pinterest (medium)
  • Design and Evaluate a Home Carousel - Pinterest (medium)
  • Evaluate Carousel and Billboard Lift - Pinterest (medium)
Pinterest · Oct 13, 2025, 9:49 PM

Experiment and Metric Plan: New Shopping Module Embedded in the Pins Feed

Context

You are introducing a Shopping module directly into the Pins feed. The goal is to assess whether this module increases shopping outcomes without harming overall user and monetization health. Design a metrics plan and an experiment that:

  • Selects a clear primary success metric and 3–5 guardrail metrics (with precise definitions).
  • Includes at least one intermediate/funnel metric for diagnosis.
  • Addresses spillover/interference (e.g., repins/shares exposing control users) and learning/novelty effects with a concrete experimental design.
  • Specifies power and MDE assumptions, handles seasonality, and includes an A/A test and CUPED plan.
  • Provides a decision framework given mixed results.
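The power/MDE point above reduces to a standard two-proportion sample-size formula, which CUPED then shrinks by a factor of (1 − ρ²). A back-of-envelope sketch; the baseline rate, relative MDE, and pre-period correlation below are hypothetical assumptions, not Pinterest figures:

```python
# Sketch of a power/MDE calculation for a two-proportion test, with a
# CUPED-style variance-reduction factor. All inputs are hypothetical.
from statistics import NormalDist

def sample_size_per_arm(p_base, rel_mde, alpha=0.05, power=0.80, cuped_rho=0.0):
    """Users needed per arm to detect a relative lift of `rel_mde`
    on a baseline conversion rate `p_base` (two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    p_treat = p_base * (1 + rel_mde)
    var_sum = p_base * (1 - p_base) + p_treat * (1 - p_treat)
    n = (z_alpha + z_beta) ** 2 * var_sum / (p_treat - p_base) ** 2
    # A CUPED covariate with pre-period correlation rho shrinks the
    # required sample by a factor of (1 - rho^2).
    return n * (1 - cuped_rho ** 2)

# Hypothetical: 5% baseline Shopping CTR, detect a 2% relative lift.
n_plain = sample_size_per_arm(0.05, 0.02)
n_cuped = sample_size_per_arm(0.05, 0.02, cuped_rho=0.5)
print(round(n_plain), round(n_cuped))  # CUPED cuts the requirement by 25%
```

Small relative MDEs on a low baseline rate demand very large samples, which is why the horizon length and CUPED plan matter in this design.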

Tasks

  1. Metrics
  • Pick one primary success metric and 3–5 guardrail metrics. For each metric, define the numerator/denominator, unit of analysis, and aggregation window. Include at least one intermediate/funnel metric (e.g., clicks on the new module, time spent in Shopping).
  • Discuss the pros and cons of using DAU and user time spent as primaries versus alternatives (e.g., Shopping CTR, Add-to-Cart Rate, GMV/DAU). For guardrails, specify acceptable directions and magnitudes of change.
  2. Experiment Design for Spillover and Learning Effects
  • Choose and justify one concrete design: user-level randomization with exposure logging and adjacency tests; cluster/geo randomization; switchback (time-based); or a two-stage saturation design.
  • Detail the randomization unit, eligibility/exposure rules, cooldown, novelty burn-in, and how you would detect and quantify spillover (e.g., graph distance, household/geo adjacency) and learning (e.g., time-on-feature slope).
  3. Power, MDE, and Analysis Hygiene
  • State assumptions for baseline rates/variances, intra-cluster correlation if clustered, and horizon length, and explain how you will handle seasonality and peaky traffic (weekly cycles).
  • Include an A/A test and a CUPED (covariate-adjustment) plan.
  4. Decision Framework with Example Results
  • After a 21-day test, suppose you observe: +2.3% lift in Shopping CTR, +1.1% in GMV/user, −0.6% in overall time spent, and −0.2% in DAU.
  • Describe a net weighted lift (or multi-objective) rubric, guardrail thresholds, sensitivity to long-term effects, and what you would recommend to the PM.
  • List additional diagnostics you would run before rollout (e.g., user/creator segment heterogeneity, ad revenue cannibalization, repeat usage vs. one-off novelty).
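The mixed 21-day results above can be rolled into a single number with a weighted rubric plus hard guardrail floors. A minimal sketch; the metric weights and tolerated-drop thresholds are hypothetical assumptions for illustration, not a recommended policy:

```python
# Minimal net-weighted-lift rubric for the mixed results quoted above.
# Weights and guardrail floors are hypothetical assumptions.
lifts_pct = {          # observed relative lifts, in percent
    "shopping_ctr": 2.3,
    "gmv_per_user": 1.1,
    "time_spent": -0.6,
    "dau": -0.2,
}
weights = {            # hypothetical business weights, summing to 1
    "shopping_ctr": 0.2,
    "gmv_per_user": 0.4,
    "time_spent": 0.2,
    "dau": 0.2,
}
guardrail_floors_pct = {"time_spent": -1.0, "dau": -0.5}  # max tolerated drops

net_lift = sum(weights[m] * lifts_pct[m] for m in lifts_pct)
guardrails_ok = all(
    lifts_pct[m] >= floor for m, floor in guardrail_floors_pct.items()
)
print(f"net weighted lift = {net_lift:+.2f}%, guardrails ok = {guardrails_ok}")
```

Under these assumed weights the net lift is positive and no floor is breached, but a rubric like this only frames the recommendation; segment heterogeneity, ad-revenue cannibalization, and long-term holdout checks should still precede a rollout.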

Solution
