Define metrics and design experiments for notifications
Company: Meta
Role: Data Scientist
Category: Analytics & Experimentation
Difficulty: hard
Interview Round: Technical Screen
You own a new notification: “Your friend is attending a local event—join them?” Define what “high quality” means for this notification system.
1) Specify one North Star metric, three driver metrics, and three guardrail metrics with exact, unit-consistent formulas (include denominator definitions and attribution windows). Distinguish leading vs. lagging indicators.
2) Before building, estimate adoption and size the opportunity using historical event data. The naive estimator averages, across past events, the percentage of sign-ups who arrived with ≥1 friend. List at least four biases (e.g., event-type confounding, seasonality, selection on observables, friendship-inference error) and propose a corrected estimator (e.g., stratified weighting or matched cohorts). Show how you would compute a 95% CI for expected daily notification opens.
3) Justify the engineering expense: derive a break-even formula linking expected incremental opens, sessions, and retention to dev-weeks and infra cost, and state the minimum detectable effect (MDE) required to proceed.
4) Pre-launch evidence plan: describe how to use concept tests/surveys and analog-notification benchmarks to predict CTR uplift, and how to adjust for response bias and population mismatch.
5) A/B design: define the unit of randomization, primary and guardrail metrics, and sample sizing; state the conditions that require cluster randomization (e.g., by event or geography) due to network effects; and specify how to detect interference and what to do if contamination is observed.
6) Decision framework: if an experiment shows time-on-site increases but CTR decreases, outline a metric hierarchy or utility function to make a go/no-go call, including constraints on unsubscribe/complaint rates and notification volume.
Quick Answer: This question evaluates a candidate's competency in analytics and experimentation: metric specification (North Star, drivers, guardrails), causal inference and bias correction, power and MDE calculations, A/B and cluster-randomized experiment design, attribution and instrumentation, and decision-framework construction for notification systems. It is commonly asked because it probes both conceptual understanding and practical application: rigorous measurement, pre-launch opportunity sizing, investment justification, interference detection, and trade-off management in real-world product experiments.
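A strong answer to part 2 can make the corrected estimator and its CI concrete. The sketch below uses post-stratification (one of the corrections the question suggests) with a normal-approximation interval; the strata, weights, sample sizes, open rates, and daily send volume are all invented for illustration:

```python
import math
from statistics import NormalDist

# Hypothetical historical data per stratum (e.g., event type):
# (weight of stratum in the target notification mix, historical
#  sample size, observed open rate in that stratum).
strata = [
    (0.50, 4000, 0.12),  # concerts
    (0.30, 2500, 0.08),  # sports
    (0.20, 1500, 0.15),  # meetups
]
daily_notifications = 200_000  # assumed daily send volume

# Post-stratified point estimate of the open rate: weight each
# stratum's rate by its share of the target population, which
# corrects for event-type mix differing between history and launch.
p_hat = sum(w * p for w, _, p in strata)

# Variance of the weighted estimate, treating strata as independent
# binomial samples.
var = sum(w**2 * p * (1 - p) / n for w, n, p in strata)
se = math.sqrt(var)

z = NormalDist().inv_cdf(0.975)  # two-sided 95%
lo, hi = p_hat - z * se, p_hat + z * se

print(f"open rate: {p_hat:.4f} (95% CI {lo:.4f} to {hi:.4f})")
print(f"expected daily opens: {daily_notifications * p_hat:,.0f} "
      f"(CI {daily_notifications * lo:,.0f} to "
      f"{daily_notifications * hi:,.0f})")
```

Scaling the rate CI by the send volume gives the interval for expected daily opens; a bootstrap over events would be the natural alternative if strata are small or rates are extreme.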
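For part 3, the break-even derivation reduces to: total cost divided by value per incremental open, spread over the evaluation horizon. A minimal sketch, where every dollar figure, conversion rate, and the retention multiplier are assumptions for illustration:

```python
# Hypothetical valuation chain: open -> session -> retained value.
value_per_incremental_session = 0.03  # $ per session (internal valuation)
sessions_per_open = 0.6               # incremental sessions per open
retention_multiplier = 1.15           # retention lift folded into value
horizon_days = 365                    # evaluation horizon

# Hypothetical costs.
dev_weeks = 8
cost_per_dev_week = 6_000             # fully loaded $
infra_cost_annual = 20_000            # $ per year

total_cost = dev_weeks * cost_per_dev_week + infra_cost_annual

# Break-even: daily incremental opens whose cumulative value over
# the horizon equals total cost.
value_per_open = (sessions_per_open
                  * value_per_incremental_session
                  * retention_multiplier)
break_even_daily_opens = total_cost / (horizon_days * value_per_open)
print(f"break-even: {break_even_daily_opens:,.0f} incremental opens/day")
```

The MDE to proceed then follows directly: divide the break-even daily opens by the baseline daily opens from the part-2 sizing to get the minimum relative lift worth detecting.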
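Part 5's sample sizing can be shown with the standard two-proportion normal-approximation formula, inflated by a design effect when randomizing by cluster (event or geography). Baseline CTR, MDE, cluster size, and ICC below are illustrative assumptions:

```python
import math
from statistics import NormalDist

def sample_size_per_arm(p0, mde_rel, alpha=0.05, power=0.8,
                        design_effect=1.0):
    """Per-arm n for a two-proportion z-test (normal approximation).

    design_effect > 1 inflates n for cluster randomization:
    DE = 1 + (m - 1) * icc, for average cluster size m and
    intracluster correlation icc.
    """
    p1 = p0 * (1 + mde_rel)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    pbar = (p0 + p1) / 2
    n = ((z_a * math.sqrt(2 * pbar * (1 - pbar))
          + z_b * math.sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2
         / (p1 - p0) ** 2)
    return math.ceil(n * design_effect)

# User-level randomization: detect a 5% relative lift on a 10% CTR.
n_user = sample_size_per_arm(0.10, 0.05)

# Event-cluster randomization, avg 50 users/event, assumed ICC = 0.02:
de = 1 + (50 - 1) * 0.02  # design effect = 1.98
n_cluster = sample_size_per_arm(0.10, 0.05, design_effect=de)
print(n_user, n_cluster)
```

The near-doubling under clustering illustrates why the answer should justify the unit of randomization before committing to a runtime, and why interference detection (e.g., comparing user-randomized vs. cluster-randomized estimates) matters.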
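For part 6, one way to operationalize the go/no-go call is hard guardrail constraints plus a weighted utility over the conflicting movements. The weights and thresholds below are purely hypothetical; a real answer would tie them to the metric hierarchy agreed with stakeholders:

```python
def go_no_go(delta_time_on_site_pct, delta_ctr_pct,
             unsubscribe_rate, complaint_rate,
             w_time=0.6, w_ctr=0.4,
             max_unsub=0.005, max_complaint=0.001):
    """Hypothetical decision rule: guardrails veto, then utility decides."""
    # Hard guardrails: breach overrides any utility gain.
    if unsubscribe_rate > max_unsub or complaint_rate > max_complaint:
        return "no-go (guardrail breach)"
    # Weighted utility over relative metric movements (in %).
    utility = w_time * delta_time_on_site_pct + w_ctr * delta_ctr_pct
    return "go" if utility > 0 else "no-go"

# Time-on-site +2%, CTR -1.5%, guardrails within bounds:
print(go_no_go(+2.0, -1.5, 0.003, 0.0005))
```

With these assumed weights, utility = 0.6 * 2.0 - 0.4 * 1.5 = +0.6, so the launch proceeds; raising the unsubscribe rate past its cap would veto it regardless.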