PracHub

How would you estimate impact without A/B?

Last updated: Mar 29, 2026

Quick Overview

This question evaluates a data scientist's competency in causal inference, experimental design, metric definition, and diagnostics, with particular emphasis on confounding, selection bias, interference, and Simpson's paradox.


How would you estimate impact without A/B?

Company: Microsoft

Role: Data Scientist

Category: Analytics & Experimentation

Difficulty: medium

Interview Round: Technical Screen



Related Interview Questions

  • Design Testing Without A/B Experiments - Microsoft (medium)
  • Design evaluation when A/B test is impossible - Microsoft (easy)
  • Design and analyze email deliverability experiment - Microsoft (hard)
  • Identify research to improve business - Microsoft (medium)
Microsoft · Data Scientist · Technical Screen · Analytics & Experimentation · Jan 16, 2026

A product team at a large software company launches a new feature intended to improve user activation and downstream retention. You are asked to evaluate whether the feature is successful.

  1. Define an appropriate primary metric, secondary metrics, and guardrail metrics. Be explicit about tradeoffs between short-term engagement metrics and longer-term business metrics.
  2. Explain how you would design a standard randomized A/B test if randomization were possible, including the unit of randomization, success criteria, power or MDE considerations, and common validity checks.
  3. Now assume a true randomized experiment is not feasible because the feature has already been partially rolled out, or legal or operational constraints prevent random assignment. Describe several counterfactual estimation approaches you could use instead, such as difference-in-differences, matching or propensity-score methods, synthetic control, regression discontinuity, or instrumental variables. For each method, explain the key assumptions and major sources of bias.
  4. Suppose the core product metric suddenly drops on one specific day after launch. Describe how you would determine whether this is a real causal product effect versus a logging issue, data pipeline problem, traffic mix shift, seasonality, or an external event.
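For the power and MDE considerations in item 2, a back-of-envelope sample-size calculation can be sketched as follows. The baseline rate and MDE below are assumed illustrative numbers, and the two-proportion z-approximation is a standard textbook formula, not anything Microsoft-specific:

```python
from math import ceil
from statistics import NormalDist

def samples_per_arm(p_base, mde_abs, alpha=0.05, power=0.80):
    """Approximate per-arm n for a two-sided, two-proportion z-test.

    p_base:  baseline conversion rate (e.g. activation)
    mde_abs: minimum detectable effect, absolute (treatment - control)
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value, two-sided test
    z_beta = z.inv_cdf(power)            # quantile for the power requirement
    p_treat = p_base + mde_abs
    var = p_base * (1 - p_base) + p_treat * (1 - p_treat)
    return ceil((z_alpha + z_beta) ** 2 * var / mde_abs ** 2)

# Assumed inputs: 20% baseline activation, 1-point absolute MDE.
n = samples_per_arm(0.20, 0.01)  # roughly 25-26k users per arm
```

Because n scales with 1/MDE², halving the detectable effect roughly quadruples the required sample, which is the usual tradeoff behind choosing a realistic MDE before launch.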

Your answer should discuss confounding, selection bias, interference, Simpson's paradox, and how you would communicate uncertainty to stakeholders.
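Among the counterfactual methods listed in item 3, difference-in-differences reduces, in the simplest 2×2 case, to a difference of pre/post changes. A minimal sketch with made-up group means; the identifying assumption is that the holdout group's pre-to-post change is what the rollout group would have experienced without the feature (parallel trends):

```python
# Mean activation per (group, period); numbers are illustrative only.
means = {
    ("rollout", "pre"): 0.180, ("rollout", "post"): 0.215,
    ("holdout", "pre"): 0.175, ("holdout", "post"): 0.190,
}

def did(m):
    """2x2 difference-in-differences estimate of the treatment effect."""
    treated_change = m[("rollout", "post")] - m[("rollout", "pre")]
    control_change = m[("holdout", "post")] - m[("holdout", "pre")]
    return treated_change - control_change

effect = did(means)  # +0.035 - (+0.015) = +0.020 absolute lift
```

In practice this would be run as a regression with group, period, and interaction terms so that standard errors, covariates, and multiple periods (for a parallel-trends check on the pre-period) can be handled.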
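Simpson's paradox, called out above, is easy to demonstrate numerically: the treatment can win inside every segment yet lose in the aggregate when exposure is skewed toward a low-converting segment. The counts below are invented for illustration:

```python
# (successes, total) per arm within two hypothetical user segments.
# Treatment exposure is concentrated in the low-converting segment.
data = {
    "new_users":   {"treat": (80, 100),   "ctrl": (790, 1000)},
    "power_users": {"treat": (190, 1000), "ctrl": (18, 100)},
}

def rate(successes, total):
    return successes / total

# Within every segment, treatment converts better than control.
for seg, arms in data.items():
    assert rate(*arms["treat"]) > rate(*arms["ctrl"]), seg

# Pooled across segments, the comparison reverses.
treat_rate = sum(a["treat"][0] for a in data.values()) / sum(a["treat"][1] for a in data.values())
ctrl_rate = sum(a["ctrl"][0] for a in data.values()) / sum(a["ctrl"][1] for a in data.values())
reversal = treat_rate < ctrl_rate  # True: the aggregate flips the per-segment verdict
```

The same decomposition logic applies to item 4's sudden metric drop: before concluding a causal product effect, check whether the aggregate moved because within-segment rates changed or merely because the traffic mix across segments shifted.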

