# How would you measure shop-ads promotion success?

- **Company:** Meta
- **Role:** Data Scientist
- **Category:** Analytics & Experimentation
- **Difficulty:** Hard
- **Interview Round:** Technical Screen
## Context
You work on an ads ranking/serving system for an e-commerce product. A new ads algorithm is intended to **promote “shop ads”** (ads that drive users to a shop/storefront rather than a single item) in a feed/search results page.
The change may affect multiple stakeholders:
- **Users** (experience, relevance, satisfaction)
- **Advertisers/shops** (traffic, conversions, ROI)
- **Platform** (revenue, long-term retention)
Assume you can log impressions, clicks, dwell time, add-to-cart, purchases, shop follows, and revenue, but outcomes can be **delayed** and attribution can be noisy.
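To make "attribution can be noisy" concrete, here is a minimal sketch of last-click attribution over a fixed conversion window. The event shapes, the 7-day window, and the function name are all illustrative assumptions, not part of the prompt.

```python
from datetime import datetime, timedelta

# Assumed 7-day click-through attribution window (hypothetical choice)
ATTRIBUTION_WINDOW = timedelta(days=7)

def attribute_purchases(clicks, purchases):
    """Last-click attribution: credit each purchase to the user's most
    recent ad click within the attribution window, if any.

    clicks:    iterable of (user_id, ad_id, click_time)
    purchases: iterable of (user_id, purchase_time, revenue)
    """
    attributed = []
    for user_id, p_time, revenue in purchases:
        # Clicks by this user that precede the purchase within the window
        eligible = [
            (c_time, ad_id)
            for u, ad_id, c_time in clicks
            if u == user_id and timedelta(0) <= p_time - c_time <= ATTRIBUTION_WINDOW
        ]
        if eligible:
            _, ad_id = max(eligible)  # most recent click wins
            attributed.append((user_id, ad_id, revenue))
    return attributed

clicks = [("u1", "shop_ad_9", datetime(2024, 5, 1, 12))]
purchases = [("u1", datetime(2024, 5, 3, 9), 40.0)]  # converts 2 days later
print(attribute_purchases(clicks, purchases))
```

Note how the window choice itself creates noise: a purchase on day 8 earns no credit, so revenue metrics depend on a modeling decision, not just user behavior.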
## Questions
1. **Define “success”** for promoting shop ads. Propose:
- One **primary (decision) metric**
- Several **diagnostic metrics** to explain movement
- Several **guardrail metrics** to prevent harming user experience or long-term health
2. For each metric, explain key **trade-offs** (e.g., revenue vs. user satisfaction; short-term vs. long-term; advertiser ROI vs. platform take-rate).
3. **How would you prove the new algorithm is useful?** Describe an evaluation plan covering:
- Offline evaluation (if any)
- Online experimentation (A/B test) design
- How you’d handle common pitfalls (selection bias, seasonality, delayed conversion, interference between users/shops, attribution changes).
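A candidate could anchor the online-experiment discussion with a simple decision rule: compare fixed-window conversion rates between arms with a two-proportion z-test. The counts below are hypothetical, and restricting to a fixed post-exposure window (e.g., 7 days per user) is one assumed way to keep delayed conversions from biasing the earlier-enrolled arm.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    Counts should include only conversions landing inside a fixed
    post-exposure window, so delayed outcomes are measured on equal
    footing across arms.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts: control (a) vs. shop-ads-promotion treatment (b)
z, p = two_proportion_ztest(conv_a=1200, n_a=50000, conv_b=1310, n_b=50000)
print(f"z={z:.2f}, p={p:.4f}")
```

User-level randomization handles within-user interference, but shop-side interference (budget reallocation across shops) would still need cluster- or budget-split designs, which this per-user test does not capture.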
## Deliverable
Write a 10–15 minute interview-style outline with the proposed metric suite, the reasoning behind it, and an end-to-end validation plan.
**Quick Answer:** This question tests a data scientist's ability to design a metric suite — a primary decision metric plus diagnostic and guardrail metrics — for an ads-ranking change, reason about the trade-offs between them, and validate the change end-to-end with offline evaluation and a well-designed A/B test that accounts for delayed outcomes, noisy attribution, and interference.