How to test an account ranking change
Company: Meta
Role: Data Scientist
Category: Analytics & Experimentation
Difficulty: medium
Interview Round: Technical Screen
A product team is evaluating a change to the account switcher in a multi-account app.
Current experience:
- When a user opens the account switcher, accounts are sorted by most recent visit.
Proposed experience:
- Sort accounts by the number of unread notifications, in descending order.
- If two accounts have the same number of unread notifications, break ties by most recent visit.
The hypothesis is that users will reach accounts that need attention more quickly, but the change could also slow down users who habitually return to their most recently visited account.
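The two orderings can be sketched in a few lines. This is a minimal illustration, not part of the question: the field names (`unread`, `last_visit_at`) and the epoch-second timestamps are assumptions standing in for whatever the real logs contain.

```python
# Hypothetical account snapshots; field names and values are illustrative.
# last_visit_at is an epoch-seconds timestamp (larger = more recent).
accounts = [
    {"id": "a", "unread": 0, "last_visit_at": 300},
    {"id": "b", "unread": 4, "last_visit_at": 100},
    {"id": "c", "unread": 4, "last_visit_at": 200},
]

# Current experience: most recently visited account first.
current_order = sorted(accounts, key=lambda a: a["last_visit_at"], reverse=True)

# Proposed experience: unread count descending. Sorting by recency first and
# then by unread count exploits Python's stable sort, so recency becomes the
# tie-breaker automatically.
proposed_order = sorted(accounts, key=lambda a: a["last_visit_at"], reverse=True)
proposed_order.sort(key=lambda a: a["unread"], reverse=True)

print([a["id"] for a in current_order])   # → ['a', 'c', 'b']
print([a["id"] for a in proposed_order])  # → ['c', 'b', 'a']
```

Note how account `a`, the most recently visited, drops from first to last under the proposed rule because it has no unread notifications; this is exactly the habitual-user cost the hypothesis warns about.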
Assume you have historical logs with:
- account-switcher impressions
- the ordered list of accounts shown at impression time
- unread notification counts for each account at impression time
- each account's `last_visit_at` snapshot
- which account the user selected, if any
- downstream outcomes such as notification reads, session starts, conversions, and retention
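With these logs, one way to start on question 1 is an offline replay: re-rank each logged impression under the proposed rule and check whether the account the user actually selected would have appeared higher or lower. The sketch below assumes hypothetical field names (`selected_account_id`, `accounts`, `unread`, `last_visit_at`) mapping onto the logged fields listed above.

```python
def rerank_proposed(accounts):
    """Proposed order: unread count descending, ties broken by most
    recent visit (stable sort keeps recency order within ties)."""
    out = sorted(accounts, key=lambda a: a["last_visit_at"], reverse=True)
    out.sort(key=lambda a: a["unread"], reverse=True)
    return out

def selected_rank_shift(impression):
    """Positions the selected account would move up (positive) or down
    (negative) if the impression were re-ranked under the proposed rule."""
    selected = impression["selected_account_id"]
    shown = impression["accounts"]  # ordered list as shown at impression time
    old_rank = next(i for i, a in enumerate(shown) if a["id"] == selected)
    new_rank = next(
        i for i, a in enumerate(rerank_proposed(shown)) if a["id"] == selected
    )
    return old_rank - new_rank

# Example: the user picked an account with unread notifications that the
# recency sort had buried in second place.
impression = {
    "selected_account_id": "c",
    "accounts": [  # shown order under the current recency-based ranking
        {"id": "a", "unread": 0, "last_visit_at": 300},
        {"id": "c", "unread": 4, "last_visit_at": 200},
        {"id": "b", "unread": 4, "last_visit_at": 100},
    ],
}
print(selected_rank_shift(impression))  # → 1 (would move up one position)
```

A caveat worth raising in the interview: logged selections were made under the old ordering, so position bias contaminates this replay (users tend to click whatever is on top). Aggregated rank shifts are directional evidence only, which is part of why question 4 matters.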
Answer the following:
1. How would you use historical data to assess whether this ranking change is likely to help before launching an experiment?
2. What metrics would you choose as primary, secondary, and guardrail metrics? Discuss tradeoffs between improving notification attention and preserving low-friction switching behavior.
3. How would you design the A/B test, including eligibility, randomization unit, exposure definition, power or MDE considerations, variance reduction, and analysis plan?
4. What sources of bias or misinterpretation should you watch for in both the historical analysis and the experiment?
Quick Answer: This question evaluates a data scientist's competence in causal inference, experiment design, metric selection, and observational analysis, using historical logs to predict the impact of a UI ranking change before and during an A/B test.