Evaluate account re-ranking via logs and A/B test
Company: Meta
Role: Data Scientist
Category: Analytics & Experimentation
Difficulty: easy
Interview Round: Technical Screen
A product has **users with multiple accounts**. In the UI, these accounts are shown as a list.
- **Current ranking**: accounts are sorted by **most recent visit** (most recently visited account appears at the top).
- **Proposed change**: sort accounts by **number of notifications** (accounts with the most notifications appear at the top).
You are asked:
1) **Historical/offline analysis**: Using existing logs and historical data, how would you assess whether this ranking change is likely to improve the product? Be explicit about:
- what success means (primary metric + diagnostic metrics + guardrails)
- what data you need (events/logging schema at a high level)
- key confounders/biases (e.g., selection/position bias, heavy-user effects, regression to the mean)
- what analyses you would run and how you would interpret results
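One concrete offline analysis the bullets above call for is a counterfactual re-ranking of logged sessions: for each session, re-sort the user's accounts under each candidate ordering and ask where the account the user actually switched to would have landed. The schema, field names, and toy data below are illustrative assumptions, not a real logging spec; note also that clicks logged under the current (recency) ranking carry position bias toward whatever already sat on top, so this comparison favors the incumbent ordering.

```python
# Hypothetical log schema: one row per (session, account) impression with the
# account's last-visit timestamp, its notification count at render time, and
# whether the user actually switched to ("clicked") that account.
from dataclasses import dataclass

@dataclass
class AccountImpression:
    session_id: str
    account_id: str
    last_visit_ts: int   # unix seconds of the account's previous visit
    notif_count: int     # pending notifications when the list was rendered
    clicked: bool        # did the user pick this account in this session?

def avg_clicked_rank(impressions, key, reverse=True):
    """Counterfactual check: re-sort each session's accounts by `key` and
    return the average 1-based rank of the account the user actually chose.
    Lower is better -- the chosen account would have been nearer the top."""
    by_session = {}
    for imp in impressions:
        by_session.setdefault(imp.session_id, []).append(imp)
    ranks = []
    for accounts in by_session.values():
        ordered = sorted(accounts, key=key, reverse=reverse)
        for rank, imp in enumerate(ordered, start=1):
            if imp.clicked:
                ranks.append(rank)
    return sum(ranks) / len(ranks)

# Toy data: in both sessions the user bypassed the most recent account
# for the notification-heavy one (caveat: position bias, see above).
logs = [
    AccountImpression("s1", "a", 1000, 0, clicked=False),
    AccountImpression("s1", "b",  900, 5, clicked=True),
    AccountImpression("s2", "a", 2000, 1, clicked=False),
    AccountImpression("s2", "b", 1500, 4, clicked=True),
]
rank_recency = avg_clicked_rank(logs, key=lambda i: i.last_visit_ts)
rank_notifs  = avg_clicked_rank(logs, key=lambda i: i.notif_count)
```

If `rank_notifs` beats `rank_recency` on real logs, that is suggestive but not causal evidence; the position-bias and heavy-user caveats listed above still apply, which is why part 2 asks for an online test.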
2) **A/B test design**: Propose an online experiment to measure the causal impact of the new ranking. Cover:
- experiment unit and randomization level (user/account/device)
- eligibility criteria (e.g., users with 2+ accounts)
- key metrics (primary/secondary/guardrails) and how to compute them
- sample size / MDE considerations and duration
- analysis plan (e.g., CUPED, handling multiple comparisons)
- pitfalls (spillover, novelty effects, instrumentation, SRM) and how to mitigate them
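Three of the mechanics above (sample size/MDE, the SRM check, and CUPED) can be sketched in a few lines of standard-library Python. The baseline rate, MDE, and counts below are placeholder numbers, and the formulas are the usual normal-approximation versions, not a prescribed Meta methodology.

```python
import math
from statistics import NormalDist

def sample_size_per_arm(p_baseline, mde_abs, alpha=0.05, power=0.8):
    """Two-proportion z-test approximation: users needed per arm to detect
    an absolute lift of `mde_abs` over baseline rate `p_baseline`."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = p_baseline + mde_abs / 2          # rough pooled rate
    variance = 2 * p_bar * (1 - p_bar)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / mde_abs ** 2)

def srm_p_value(n_control, n_treatment, expected_ratio=0.5):
    """Sample-ratio-mismatch check: two-sided z-test (normal approximation
    to the binomial) that the observed split matches the expected ratio."""
    n = n_control + n_treatment
    expected = n * expected_ratio
    se = math.sqrt(n * expected_ratio * (1 - expected_ratio))
    z = (n_treatment - expected) / se
    return 2 * NormalDist().cdf(-abs(z))

def cuped_adjust(metric, pre_metric):
    """CUPED: subtract theta * (x - mean(x)) using each user's pre-period
    metric as the covariate; preserves the mean, shrinks the variance."""
    n = len(metric)
    mx = sum(pre_metric) / n
    my = sum(metric) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(pre_metric, metric)) / n
    var = sum((x - mx) ** 2 for x in pre_metric) / n
    theta = cov / var
    return [y - theta * (x - mx) for x, y in zip(pre_metric, metric)]

# Placeholder sizing: 10% baseline account-switch rate, detect +0.5pp absolute.
n_needed = sample_size_per_arm(0.10, 0.005)

# SRM: a drifted 50/50 split yields a tiny p-value, flagging broken assignment.
p_srm = srm_p_value(50_500, 49_500)
```

A tiny `srm_p_value` means the traffic split itself is broken and the experiment readout should not be trusted; CUPED's variance reduction, in turn, lets the same MDE be reached with fewer users or a shorter run.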
Quick Answer: Offline, use session logs to measure how often users bypass the top (most recently visited) account for one with pending notifications, and run a counterfactual re-ranking to see which ordering would have placed the chosen account higher, while accounting for position bias and heavy-user skew. Online, run a user-level A/B test restricted to users with 2+ accounts, with a primary metric such as successful account switches or engagement per eligible user, guardrail metrics for overall engagement and latency, a pre-registered sample size sized to the MDE, CUPED for variance reduction, and SRM checks before reading results.