This question evaluates product analytics and experimentation competencies for a Data Scientist role in the Analytics & Experimentation domain: product framing, unit economics, revenue modeling, computation-ready metric design, statistical power and sample-size calculation, A/B test setup, and cost-benefit/breakeven analysis. It is commonly asked to assess a candidate's ability to translate product hypotheses into measurable business outcomes and to defend decisions under stakeholder scrutiny. It tests both conceptual understanding (segmentation, metric validity, and bias considerations) and practical application (defining precise numerators and denominators, computing minimum sample sizes and test duration, and producing actionable ROI models).

Pick a consumer digital app you love. Assume the interviewer knows nothing about it.

1) Explain the product, core jobs-to-be-done, target audience segments, and top three competitors, including the app’s differentiators.
2) Enumerate all current revenue streams and unit economics; propose one new revenue stream with a six-month ROI model.
3) Define three precise, computation-ready success metrics for the new stream (each with numerator, denominator, and event/window definitions) and two guardrail metrics with thresholds.
4) List six concrete product improvements for the app; prioritize one to ship first and justify the choice.
5) Design an A/B test for the prioritized improvement: state the unit of randomization, eligibility/exclusions, assignment, and bias mitigations. Compute the minimum sample size and test duration using these inputs: daily active users = 500,000; baseline conversion-to-purchase per user-day = 15%; expected absolute lift = +2.0 percentage points; power = 80%; two-sided alpha = 5%; 1:1 split. State your assumptions and show the formula you’d use.
6) If finance challenges that the improvement is too costly, present your cost-benefit model and breakeven point, and state the decision you’d make if interim results underperform the MDE after two weeks.
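For part 5, one standard approach is the per-arm sample-size formula for a two-sided two-proportion z-test. A minimal Python sketch using the prompt's inputs (the function name and structure are illustrative, not part of the question):

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sided two-proportion z-test, 1:1 split."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for power = 0.80
    p_bar = (p1 + p2) / 2                          # pooled proportion under H0
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Prompt inputs: baseline 15%, expected absolute lift +2.0 pp -> 17%
n_per_arm = sample_size_two_proportions(0.15, 0.17)
print(n_per_arm)  # roughly 5,300 users per arm
```

With 500,000 daily active users and a 1:1 split, this sample accrues in well under a day; in practice a strong answer would still argue for running at least one full weekly cycle to capture day-of-week effects.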
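For part 6, a breakeven model compares the improvement's cost to the incremental contribution margin it generates. A minimal sketch, where the build cost and margin per purchase are hypothetical assumptions (the prompt supplies no cost or value figures):

```python
# Hypothetical inputs -- the prompt gives no cost or value figures.
build_cost = 120_000.0        # one-time engineering + design cost (assumed)
daily_active_users = 500_000  # from the prompt
lift = 0.02                   # expected absolute lift in conversion (prompt)
margin_per_purchase = 0.50    # contribution margin per purchase (assumed)

# Incremental purchases per day at full rollout, then days to recoup cost.
incremental_purchases_per_day = daily_active_users * lift
incremental_margin_per_day = incremental_purchases_per_day * margin_per_purchase
breakeven_days = build_cost / incremental_margin_per_day
print(round(breakeven_days, 1))  # 24.0 days under these assumptions
```

The same arithmetic supports the interim decision: if observed lift after two weeks tracks below the MDE, re-run the model with the observed effect to see whether breakeven is still reached within an acceptable horizon before deciding to continue, iterate, or stop.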