How would you measure Group Call success?
Company: Meta
Role: Data Scientist
Category: Analytics & Experimentation
Difficulty: medium
Interview Round: Technical Screen
You are interviewing for a **Data Scientist** role at a social communication product similar to those built at Meta. The team asks you to evaluate a **Group Call** feature that lets multiple users join the same voice or video call.
Design a measurement framework for this feature.
Please address all of the following:
1. Define the product goal and the most important **north-star metric**.
2. Propose a set of **success metrics**, including adoption, engagement, and quality metrics.
3. Explain what **user retention** means in this context. Be explicit about the difference between:
- overall app retention,
- feature-specific retention for Group Call,
- retention of call creators vs invited participants.
4. Discuss how to define and use **7-day retention** and **28-day retention**. Include:
- exact formulas,
- which user cohort you would use,
- when 7-day retention is more informative,
- when 28-day retention is more informative.
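The cohort retention formulas in item 4 can be made concrete with a small sketch. This is one common definition (share of the cohort that returns to the feature *within* N days of day 0); an interviewer may instead want "active on exactly day N," so state your choice explicitly. The user IDs and dates below are purely illustrative.

```python
from datetime import date, timedelta

# Hypothetical cohort of first-time Group Call users, all with day 0 on
# 2024-01-01, mapped to the later dates on which they used Group Call again.
cohort_start = date(2024, 1, 1)
activity = {
    "u1": [date(2024, 1, 5), date(2024, 1, 20)],
    "u2": [date(2024, 1, 2)],
    "u3": [],                    # never returned
    "u4": [date(2024, 1, 25)],   # returned within 28 days but not 7
}

def retention(activity, cohort_start, window_days):
    """Fraction of the cohort active again within `window_days` of day 0."""
    window_end = cohort_start + timedelta(days=window_days)
    retained = sum(
        any(cohort_start < d <= window_end for d in dates)
        for dates in activity.values()
    )
    return retained / len(activity)

print(retention(activity, cohort_start, 7))   # 7-day retention  -> 0.5
print(retention(activity, cohort_start, 28))  # 28-day retention -> 0.75
```

For an infrequently used feature like Group Call, the 28-day window is often the more informative of the two, since many genuinely retained users will not have a natural reason to call again within a week.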
5. Explain the **short-term vs long-term tradeoff**. For example, a change could increase call starts or call duration in the short run but hurt long-term user experience.
6. If the team launches a new Group Call improvement and wants to run an experiment, explain:
- the primary metric,
- key guardrail metrics,
- major sources of bias or confounding,
- any interference or network effects that might make experimentation difficult.
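One standard mitigation for the interference problem in item 6 is to randomize at the level of social clusters rather than individual users, so that people who call each other usually land in the same arm. The sketch below assumes cluster assignments (`user_to_cluster`) have already been computed upstream, e.g. from the call graph; the IDs are hypothetical.

```python
import random

def assign_arms(user_to_cluster, seed=0):
    """Cluster-randomized assignment: every user in a cluster shares an arm."""
    rng = random.Random(seed)
    clusters = sorted(set(user_to_cluster.values()))
    cluster_arm = {c: rng.choice(["control", "treatment"]) for c in clusters}
    return {u: cluster_arm[c] for u, c in user_to_cluster.items()}

# Illustrative clusters: u1/u2 call each other often, as do u3/u4.
user_to_cluster = {"u1": "c1", "u2": "c1", "u3": "c2", "u4": "c2"}
assignment = assign_arms(user_to_cluster)
# By construction, users within the same cluster always share an arm,
# reducing (though not eliminating) cross-arm interference.
```

The tradeoff is fewer effective randomization units and therefore lower statistical power, which is worth mentioning alongside the design itself.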
7. Mention important segmentations you would check before making a launch decision.
You should assume the product is used globally across new and existing users, and that Group Call usage may be less frequent than ordinary one-to-one messaging.
Quick Answer: This question evaluates a data scientist's ability to design an end-to-end measurement framework: defining the product goal and a north-star metric; proposing adoption, engagement, and quality success metrics; giving precise retention definitions (app-level vs. feature-specific, and call creators vs. invited participants); choosing and interpreting 7-day and 28-day cohort retention windows; and designing an experiment with a primary metric, guardrails, bias sources, and network effects. It sits in the Analytics & Experimentation domain and tests practical product-analytics skill, with quantitative reasoning about short-term vs. long-term tradeoffs, segmentation, and experimental validity.