
Recover causal effect without a control group

Last updated: Mar 29, 2026

Quick Overview

This question evaluates a candidate's competence in causal inference, observational estimation, and experiment analytics—specifically identification strategies, causal assumptions, validation/placebo tests, diagnostics, and uncertainty quantification after an accidental full rollout.

  • hard
  • Pinterest
  • Analytics & Experimentation
  • Data Scientist


Company: Pinterest

Role: Data Scientist

Category: Analytics & Experimentation

Difficulty: hard

Interview Round: Onsite



Related Interview Questions

  • How would you evaluate a carousel launch? - Pinterest (medium)
  • How to evaluate a new Carousel feature - Pinterest (easy)
  • Evaluate Fresh Content and Video Experiments - Pinterest (medium)
  • Design and Evaluate a Home Carousel - Pinterest (medium)
  • Evaluate Carousel and Billboard Lift - Pinterest (medium)

Post-hoc Causal Estimation After a Failed A/B Rollout

Context

An intern accidentally shipped a feature to 100% of eligible users for 5 consecutive days (the T period). There is no concurrent control. You have 4 full weeks of stable pre-period data (the P period) collected under identical eligibility rules and product configuration.

  • Primary metric: 1-day retention (D+1 retention).
  • Guardrails: crashes per session, latency p95, purchase conversion.

Your goal is to recover the causal treatment effect using observational methods and to describe validation, assumptions, uncertainty, and prevention.

Tasks

  1. Propose and compare at least two identification strategies to estimate the treatment effect using observational methods (minimal code sketches for each strategy follow this list):
    • (a) Pre–post with CUPED.
    • (b) Synthetic control via matching/propensity-score weighting (PSW) against ineligible-but-similar users or delayed-exposure users. For (b), specify covariates, overlap checks, and diagnostics (SMD, eCDF, weight trimming).
    • (c) Difference-in-differences (DiD) using a holdout geography.
  2. For each method, state the assumptions (e.g., parallel trends, no interference, ignorability) and design falsification/placebo tests to probe them.
  3. Explain how to compute ATT vs ATE, handle calendar effects and novelty/seasonality, and quantify uncertainty (cluster-robust SEs or bootstrap under weighting); see the bootstrap sketch after this list.
  4. List pitfalls in the original A/B setup that led to this failure and propose a prevention plan (exposure checks, invariant metrics, automated power and allocation validation); an allocation-validation (SRM) check is sketched after this list.
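
The sketches below are illustrative only, not a full solution: they are written in Python against hypothetical data frames, and every column name (retained_d1, pre_retention, treated, the covariate list, the geo-day panel) is an assumption rather than something given in the question. First, a minimal pre-post comparison with a CUPED adjustment, using each user's P-period retention as the pre-period covariate:

```python
import numpy as np
import pandas as pd


def cuped_adjust(y: pd.Series, x_pre: pd.Series) -> pd.Series:
    """CUPED adjustment: subtract theta * (x_pre - mean(x_pre)) from the outcome,
    where theta is the variance-minimizing coefficient of y on the pre-period covariate."""
    theta = np.cov(x_pre, y)[0, 1] / x_pre.var(ddof=1)
    return y - theta * (x_pre - x_pre.mean())


# Hypothetical usage: one row per user, with
#   retained_d1   -- 1-day retention observed during the 5-day T period
#   pre_retention -- the same user's average 1-day retention over the 4-week P period
# adjusted = cuped_adjust(df["retained_d1"], df["pre_retention"])
# pre_post_effect = adjusted.mean() - df["pre_retention"].mean()
# This still leans on the P period being a valid counterfactual for the T period.
```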
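For strategy (b), a sketch of propensity-score weighting targeted at the ATT, with the weight trimming and SMD balance diagnostic the task asks for; overlap would additionally be probed by comparing the propensity-score distributions and covariate eCDFs of the two groups (not shown). The treated flag and the comparison pool (delayed-exposure or ineligible-but-similar users) are assumptions about how the data would be assembled:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression


def att_psw(df: pd.DataFrame, covariates):
    """Propensity-score weighting for the ATT, with an SMD balance diagnostic
    and weight trimming. `treated` marks exposed users (1) vs. the comparison
    pool (0), e.g. delayed-exposure or ineligible-but-similar users."""
    X = df[covariates].to_numpy()
    t = df["treated"].to_numpy()
    y = df["retained_d1"].to_numpy()

    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]

    # ATT weights: treated users get weight 1, comparison users get p/(1-p).
    w = np.where(t == 1, 1.0, ps / (1.0 - ps))
    # Trim extreme comparison weights (here at the 99th percentile) to control variance.
    w = np.clip(w, None, np.quantile(w[t == 0], 0.99))

    # Standardized mean difference after weighting; aim for |SMD| < 0.1 per covariate.
    smd = {}
    for c in covariates:
        x = df[c].to_numpy()
        m1 = np.average(x[t == 1], weights=w[t == 1])
        m0 = np.average(x[t == 0], weights=w[t == 0])
        pooled_sd = np.sqrt((x[t == 1].var(ddof=1) + x[t == 0].var(ddof=1)) / 2)
        smd[c] = (m1 - m0) / pooled_sd

    att = np.average(y[t == 1], weights=w[t == 1]) - np.average(y[t == 0], weights=w[t == 0])
    return att, smd
```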
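For strategy (c), a difference-in-differences sketch on a hypothetical geo-day panel that includes the holdout geography, with standard errors clustered by geography via statsmodels:

```python
import statsmodels.formula.api as smf

# Hypothetical geo-day panel `panel` with columns:
#   retention   -- mean 1-day retention for that geo on that day
#   treated_geo -- 1 for geos that received the feature, 0 for the holdout geography
#   post        -- 1 for days in the T period, 0 for days in the P period
#   geo         -- geography id, used as the cluster for robust standard errors
did = smf.ols("retention ~ treated_geo * post", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["geo"]}
)
# The interaction term is the DiD estimate; its SE is clustered by geography.
print(did.params["treated_geo:post"], did.bse["treated_geo:post"])
```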
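For the uncertainty quantification in task 3 under weighting, a percentile bootstrap that resamples users and refits the propensity model in every replicate, so weight-estimation error is propagated into the interval; it reuses the hypothetical att_psw function from the weighting sketch:

```python
import numpy as np


def bootstrap_att_ci(df, covariates, n_boot=1000, seed=0):
    """Percentile bootstrap CI for the PSW ATT; the propensity model is refit
    inside each replicate so its estimation error is reflected in the interval."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_boot):
        resample = df.sample(frac=1.0, replace=True,
                             random_state=int(rng.integers(0, 2**32 - 1)))
        att, _ = att_psw(resample, covariates)  # att_psw from the sketch above
        estimates.append(att)
    return np.percentile(estimates, [2.5, 97.5])
```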
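For the prevention plan in task 4, a simple automated allocation-validation (sample-ratio-mismatch) check of the kind that would have flagged a 0% control assignment before any post-hoc analysis was needed; the threshold and function name are illustrative:

```python
from scipy.stats import chisquare


def allocation_ok(n_treatment, n_control, expected_split=(0.5, 0.5), alpha=0.001):
    """Sample-ratio-mismatch (SRM) guard: chi-square test of observed assignment
    counts against the configured split. Returns False (block/alert) when the
    observed allocation is implausible, including a 100% rollout with no control."""
    total = n_treatment + n_control
    expected = [total * expected_split[0], total * expected_split[1]]
    _, p_value = chisquare(f_obs=[n_treatment, n_control], f_exp=expected)
    return p_value >= alpha


# allocation_ok(100_000, 0)      -> False: no control was ever assigned
# allocation_ok(50_310, 49_690)  -> True: consistent with a 50/50 split
```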


