Evaluate Facebook Dating launch and validate success
Company: Meta
Role: Data Scientist
Category: Analytics & Experimentation
Difficulty: hard
Interview Round: Technical Screen
You are assessing whether to scale Facebook Dating from a limited market pilot to a broader rollout. Build a rigorous validation plan:
- Hypotheses and success metrics: Define the primary objective (e.g., successful matches per active seeker), guardrails (core app retention, message abuse reports, privacy complaints), and counter‑metrics (cannibalization of core social engagement). Specify metric definitions and acceptable deltas.
- Experimental design: Propose a market‑level rollout test (geo‑ramp) versus user‑level randomization. Justify the randomization unit, assess spillover risk, and explain how you'll mitigate cross‑market contamination (e.g., geo‑fencing, intent‑to‑treat analysis).
- Power and duration: Outline a back‑of‑the‑envelope sample size and duration plan under realistic baseline rates and minimum detectable effects; describe how you’ll monitor for novelty effects and winner’s curse.
- Expected user trends: Describe the expected shapes for new‑user activation, 1/7/28‑day retention, match‑to‑message conversion, and seasonality; what anomalies would worry you (e.g., spike in sign‑ups without matches, gender‑imbalance driven drop‑offs)?
- Validation without full RCT: If an RCT is infeasible, propose a quasi‑experimental approach (e.g., synthetic controls or staggered DiD with pre‑trend checks). Detail diagnostics you require before trusting the estimate.
- Scale/readiness criteria: Define the exact quantitative and qualitative gates to expand, pause, or roll back; include privacy/SOC 2 readiness, abuse tooling, and on‑call load considerations.
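For the power-and-duration bullet, a back-of-the-envelope calculation can be sketched in Python using the standard two-proportion normal approximation. The baseline match rate, minimum detectable effect, and daily eligible-user count below are illustrative assumptions, not Meta figures.

```python
import math
from statistics import NormalDist

def sample_size_per_arm(p_base, mde_abs, alpha=0.05, power=0.80):
    """Per-arm n for a two-sided two-proportion z-test (normal approximation)."""
    p_bar = p_base + mde_abs / 2          # pooled rate under the alternative
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    n = 2 * p_bar * (1 - p_bar) * (z_alpha + z_beta) ** 2 / mde_abs ** 2
    return math.ceil(n)

# Illustrative: 10% baseline match rate, detect a 1pp absolute lift.
n = sample_size_per_arm(p_base=0.10, mde_abs=0.01)

# Duration: divide by an assumed daily enrollment per arm, then round up to
# full weeks so weekly seasonality is observed at least twice.
daily_eligible_per_arm = 2_000   # assumption
days = math.ceil(n / daily_eligible_per_arm)
weeks = max(2, math.ceil(days / 7))
```

Even when the raw sample-size math says a few days suffice, holding the test for multiple full weeks is what lets you separate novelty effects from a durable lift.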
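The worrying anomalies named in the user-trends bullet can be encoded as simple launch monitors. The thresholds below are invented placeholders; real ones would come from pilot baselines.

```python
def launch_anomalies(signups, matches, active_seekers,
                     male_signups, female_signups):
    """Flag launch anomalies; all thresholds are illustrative placeholders."""
    alerts = []
    # Sign-up spike without matches: lots of demand, no liquidity.
    match_rate = matches / max(active_seekers, 1)
    if signups > 0 and match_rate < 0.02:
        alerts.append("sign-up spike without matches")
    # Gender imbalance severe enough to drive drop-offs on the minority side.
    total = male_signups + female_signups
    if total > 0:
        majority_share = max(male_signups, female_signups) / total
        if majority_share > 0.75:
            alerts.append("gender-imbalance-driven drop-off risk")
    return alerts
```

In practice these would run per market and per cohort day, since a healthy aggregate can hide a single badly imbalanced geo.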
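For the quasi-experimental bullet, the core difference-in-differences estimate and the simplest pre-trend diagnostic can be sketched on market-level weekly series (all numbers hypothetical):

```python
from statistics import mean

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-differences on market-level means."""
    return ((mean(treated_post) - mean(treated_pre))
            - (mean(control_post) - mean(control_pre)))

def pre_trend_gap(treated_pre, control_pre):
    """Difference in average week-over-week change during the pre-period.
    A gap far from zero violates parallel trends, so the DiD estimate
    should not be trusted without further adjustment."""
    def slope(series):
        return mean(b - a for a, b in zip(series, series[1:]))
    return slope(treated_pre) - slope(control_pre)
```

A fuller diagnostic is an event-study regression with leads and lags (placebo "effects" before launch should be near zero); the slope comparison above is only the minimal version of that check.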
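The expand/pause/roll-back gates can be made explicit as a decision rule. Every threshold below is an invented placeholder, and a real gate would also encode the qualitative checks (privacy review, abuse tooling, on-call load) as hard prerequisites, as sketched here:

```python
def launch_decision(primary_ci_low, retention_delta, abuse_rate_delta,
                    privacy_review_passed, abuse_tooling_ready):
    """Return 'expand', 'pause', or 'roll back'; thresholds are illustrative.

    primary_ci_low   -- lower CI bound on the primary metric lift
    retention_delta  -- change in core-app retention (guardrail)
    abuse_rate_delta -- change in abuse-report rate (guardrail)
    """
    # Hard stops: a clear guardrail breach triggers rollback regardless
    # of how well the primary metric is doing.
    if retention_delta < -0.02 or abuse_rate_delta > 0.005:
        return "roll back"
    # Expansion requires a statistically significant primary win, intact
    # guardrails, and operational readiness.
    if (primary_ci_low > 0 and retention_delta > -0.005
            and abuse_rate_delta < 0.001
            and privacy_review_passed and abuse_tooling_ready):
        return "expand"
    # Everything in between: hold at current exposure and keep measuring.
    return "pause"
```

Framing the gates as code forces the interviewee to commit to exact numbers, which is the point of the "exact quantitative gates" requirement.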
Quick Answer: This question evaluates a data scientist's competency in product analytics, experimental design, causal inference, and launch validation: it requires defining primary and guardrail metrics, choosing a test unit while accounting for spillover, planning power and duration, anticipating behavioral signals, proposing quasi-experimental alternatives, and setting scale/readiness criteria. It is commonly asked in Analytics & Experimentation interviews because organizations need assurance that a pilot can be scaled safely and reliably; it tests practical application of experimental methods alongside conceptual understanding of statistical power, spillover risk, diagnostic checks, and operational concerns such as privacy and abuse monitoring.