PracHub

Estimate impact without experiments and pick variant

Last updated: Mar 29, 2026

Quick Overview

This question evaluates causal inference and experimental-analysis competencies in Analytics & Experimentation and Data Science, covering observational estimands, causal identification assumptions and biases, uncertainty quantification for A/B/C tests, multiple-comparisons reasoning, and post-launch forecasting and monitoring.

  • easy
  • Upstart
  • Analytics & Experimentation
  • Data Scientist

Estimate impact without experiments and pick variant

Company: Upstart

Role: Data Scientist

Category: Analytics & Experimentation

Difficulty: easy

Interview Round: Technical Screen



Related Interview Questions

  • Evaluate channels and allocate budget - Upstart (hard)
  • Decide to ship a signup experiment - Upstart (hard)
  • Analyze aggregator lender page flows - Upstart (hard)
  • Formulate hypotheses and metrics for video-pin ramp - Upstart (hard)
  • Design Experiment to Measure Airport Surge-Pricing Impact - Upstart (hard)

Part A — Measuring impact when you cannot run an experiment

You are a Staff Data Scientist working on a product change (feature/policy/model update). Stakeholders want to measure causal impact (incremental lift) of the change, but you cannot launch a randomized experiment (e.g., legal constraints, all users must receive the change, platform limitations, or risk).

Task:

  1. Propose an end-to-end approach to estimate the causal impact of the change using observational data (you may use an ML-based counterfactual if appropriate).
  2. Clearly state:
    • The estimand (e.g., ATE, ATT, incremental purchases per user/day).
    • Key assumptions required for causal identification.
    • Major biases/failure modes (confounding, selection bias, interference, data drift, novelty effects, etc.).
    • How you would validate the approach (placebo tests, negative controls, sensitivity analysis, backtesting).
  3. Explain how you would reason about short-term vs. long-term impact, and what additional data or modeling you would need.
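One observational approach from the task above (an interrupted time-series / regression counterfactual) can be sketched as follows. This is a minimal illustration on simulated data: the linear model, the control series, and the launch day are all hypothetical stand-ins for whatever richer counterfactual model (e.g., Bayesian structural time series or synthetic control) you would use in practice, and it assumes the control series is unaffected by the change.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily outcome with a trend, a weekly cycle, and a control
# series that tracks the outcome but is not affected by the change.
days = np.arange(120)
control = 50 + 0.1 * days + rng.normal(0, 1, 120)
outcome = 2 * control + 3 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 2, 120)
launch = 90                    # change ships on day 90
outcome[launch:] += 5          # true incremental lift (unknown in practice)

# Fit on the PRE-period only: outcome ~ intercept + trend + weekly + control.
X = np.column_stack([
    np.ones_like(days, dtype=float),
    days,
    np.sin(2 * np.pi * days / 7),
    np.cos(2 * np.pi * days / 7),
    control,
])
beta, *_ = np.linalg.lstsq(X[:launch], outcome[:launch], rcond=None)

# Predict the counterfactual for the post-period; lift = actual - predicted.
counterfactual = X[launch:] @ beta
lift = outcome[launch:] - counterfactual
print(f"estimated daily lift: {lift.mean():.2f}")
```

A placebo check in this framework is to rerun the same fit with a fake launch date inside the pre-period and verify the estimated "lift" is near zero.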

Part B — 3-variant experiment and forecasting post-launch conversion

You ran an online experiment with three variants (A/B/C). The goal is to maximize CTP (purchase rate), defined as:

\[ \mathrm{CTP} = \frac{\#\text{purchases}}{\#\text{visits}} \]

Observed results (assume visits are independent Bernoulli trials; one purchase at most per visit):

  • Variant A: 150 visits, 43 purchases
  • Variant B: 200 visits, 48 purchases
  • Variant C: 100 visits, 15 purchases

Questions:

  1. Which variant is “winning”?
    • Provide point estimates of CTP.
    • Quantify uncertainty (e.g., confidence/credible intervals).
    • Address multiple comparisons / decision criteria if needed.
  2. Suppose you launch the chosen variant to 100% traffic. How would you predict the future CTP after launch?
    • Describe a statistical approach to generate a forecast (and interval).
    • List key factors that may cause post-launch CTP to differ from experiment CTP (traffic mix shift, seasonality, ramp-up effects, novelty, instrumentation changes, etc.).
    • Mention how you would monitor/validate the forecast after launch (guardrails, alerting, recalibration).
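The point estimates, intervals, and a pairwise comparison for the data above can be computed directly. A minimal sketch using normal-approximation (Wald) intervals, a pooled two-proportion z-test for the top two variants, and a Beta-posterior simulation for the launch forecast; Wilson intervals or a full Bayesian multi-arm comparison would be reasonable alternatives.

```python
import math
import random

data = {"A": (150, 43), "B": (200, 48), "C": (100, 15)}  # (visits, purchases)

# Point estimates with 95% Wald intervals.
z = 1.96
est = {}
for v, (n, x) in data.items():
    p = x / n
    se = math.sqrt(p * (1 - p) / n)
    est[v] = (p, p - z * se, p + z * se)
    print(f"{v}: CTP={p:.3f}, 95% CI=({p - z*se:.3f}, {p + z*se:.3f})")

# Pooled two-proportion z-test for the two leaders (A vs. B). With three
# variants there are 3 pairwise tests, so adjust alpha (e.g., Bonferroni 0.05/3).
(nA, xA), (nB, xB) = data["A"], data["B"]
p_pool = (xA + xB) / (nA + nB)
se_pool = math.sqrt(p_pool * (1 - p_pool) * (1 / nA + 1 / nB))
z_stat = (xA / nA - xB / nB) / se_pool
print(f"A vs B: z={z_stat:.2f}")  # |z| < 1.96: not significant at alpha=0.05

# Launch forecast for A: Beta(1+x, 1+n-x) posterior on CTP, interval by simulation.
random.seed(0)
draws = sorted(random.betavariate(1 + xA, 1 + nA - xA) for _ in range(10000))
print(f"A posterior 95% interval: ({draws[250]:.3f}, {draws[9749]:.3f})")
```

Note that A's point estimate (~0.287) leads B's (0.240), but the z-test on these sample sizes does not separate them at conventional significance levels, which is itself part of the answer: the posterior interval, not the point estimate alone, should drive the launch forecast.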
