You are a Data Scientist in an Ads organization.
Part A — Engagement differs across two countries
You observe that engagement is meaningfully different between Country A and Country B.
- Define at least **two plausible engagement metrics** (e.g., DAU/WAU, sessions/user/day, time spent, D1/D7 retention, ad interactions) and explain their tradeoffs. (Metric computations are sketched after this list.)
- Outline a structured investigation plan to determine **why** engagement differs, covering:
  - Data integrity/instrumentation and logging parity
  - Population mix / selection effects (new vs. existing users, device mix, traffic sources)
  - Seasonality/holidays and product-market differences
  - Statistical issues (Simpson’s paradox, multiple comparisons)
- Propose a minimal set of analyses (cuts, models, or decompositions) you would run, and what “next actions” different outcomes would imply. (One such decomposition is sketched after this list.)
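
For concreteness, a minimal sketch of how two of these metrics could be computed from a raw event log. The table name, file, and column names (`user_id`, `ts`, `event`, `country`) are assumptions for illustration, not a real schema:

```python
import pandas as pd

# Hypothetical raw event log; the schema is an assumption for illustration.
events = pd.read_parquet("events.parquet")  # columns: user_id, ts, event, country
events["day"] = events["ts"].dt.date
events["week"] = events["ts"].dt.to_period("W")

# DAU/WAU "stickiness": average daily actives divided by average weekly actives.
dau = events.groupby(["country", "day"])["user_id"].nunique()
wau = events.groupby(["country", "week"])["user_id"].nunique()
stickiness = dau.groupby("country").mean() / wau.groupby("country").mean()

# Sessions per user per day, counting session_start events.
starts = events[events["event"] == "session_start"]
sessions_per_user_day = (starts.groupby(["country", "day"])
                               .agg(sessions=("event", "size"),
                                    users=("user_id", "nunique"))
                               .eval("sessions / users")
                               .groupby("country").mean())
print(stickiness, sessions_per_user_day, sep="\n")
```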
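And a sketch of the mix-versus-rate decomposition that guards against Simpson’s paradox: it asks how much of the country gap survives when Country B’s within-segment rates are reweighted to Country A’s population mix. All segment names and numbers below are invented:

```python
import pandas as pd

# Hypothetical per-segment summary; every name and number here is invented.
df = pd.DataFrame({
    "country": ["A"] * 3 + ["B"] * 3,
    "segment": ["new", "existing", "resurrected"] * 2,
    "users": [400_000, 500_000, 100_000, 150_000, 800_000, 50_000],
    "sessions_per_user": [1.2, 3.5, 2.0, 1.1, 3.4, 1.9],
})

def decompose(df: pd.DataFrame, metric: str = "sessions_per_user"):
    """Split the A-vs-B gap into a mix effect (different segment shares)
    and a rate effect (different within-segment engagement)."""
    wide = df.pivot(index="segment", columns="country")
    share = wide["users"].div(wide["users"].sum())  # segment shares per country
    rate = wide[metric]

    overall_a = (share["A"] * rate["A"]).sum()
    overall_b = (share["B"] * rate["B"]).sum()
    # Counterfactual: Country B's rates under Country A's population mix.
    counterfactual = (share["A"] * rate["B"]).sum()
    return overall_b - counterfactual, counterfactual - overall_a  # mix, rate

mix, rate = decompose(df)
print(f"mix effect {mix:+.2f}, rate effect {rate:+.2f}")
# In this invented example B's apparent advantage is all mix:
# within segments, B is actually slightly LOWER than A.
```

If the gap is mostly a mix effect, the next action is a population/acquisition investigation; if it is mostly a rate effect, the product or instrumentation cuts come first.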
Part B — Find developers interested in advertising (“pay/call to action”)
You want to identify which developers (advertisers) are most likely to be interested in adopting ads tools.
- Define the target outcome (label) and the key funnel stages (e.g., visit → create account → create campaign → spend → retained spender).
- Propose the features/signals you’d use and how you’d avoid leakage (a cutoff-based sketch follows this list).
- Describe an approach to rank developers (rules vs. a model), and how you would evaluate it online and offline.
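
One way to make the label and leakage constraints concrete, a minimal sketch assuming a hypothetical developer event log (the file name, columns, dates, and horizon are all invented): the label is defined over a window after a fixed cutoff, and features are computed strictly before it.

```python
import pandas as pd

# Hypothetical event log, one row per developer action; schema is an assumption.
events = pd.read_parquet("developer_events.parquet")  # columns: dev_id, event, ts

CUTOFF = pd.Timestamp("2024-01-01")
HORIZON = pd.Timedelta(days=90)

# Label: first campaign created within 90 days AFTER the cutoff.
first_campaign = (events.loc[events["event"] == "create_campaign"]
                        .groupby("dev_id")["ts"].min())
label = (first_campaign > CUTOFF) & (first_campaign <= CUTOFF + HORIZON)

# Features: event counts computed strictly BEFORE the cutoff, so nothing
# from the label window can leak into the model.
past = events[events["ts"] <= CUTOFF]
features = (past.pivot_table(index="dev_id", columns="event",
                             values="ts", aggfunc="count")
                .fillna(0))

# Eligible population: developers active before the cutoff who had NOT yet
# created a campaign (everyone else has a trivially known outcome).
already_converted = first_campaign[first_campaign <= CUTOFF].index
train = (features.drop(index=already_converted, errors="ignore")
                 .join(label.rename("label"))
                 .fillna({"label": False})
                 .astype({"label": bool}))
```

The same cutoff discipline applies whether the ranker is a rule score or a trained model: offline evaluation replays a past cutoff, online evaluation randomizes outreach against a holdout.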
Part C — Evaluate a new ad format
A team launches a new ad format and asks you to measure whether it is “good”.
- Propose:
  - A **primary success metric** (or a small set) and its justification
  - **Diagnostic metrics** to understand the mechanism
  - **Guardrail metrics** (user experience, long-term value, platform health)
- Describe how you would design randomization and the experiment rollout (assignment and power sketches follow this list):
  - Unit of randomization (user, request, session, geo, advertiser)
  - Interference/spillovers and how you’d mitigate them
  - Power/MDE considerations and duration
- If the experiment shows **no effect**, what would you do next?
- If it shows **positive impact initially** but the effect disappears later, list plausible reasons and how you would test them (one such cut is sketched after this list).
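
A minimal sketch of deterministic assignment at a chosen randomization unit, assuming a salted-hash bucketing scheme (the salt and split below are illustrative); the same function works whether the unit is a user, advertiser, session, or geo, and picking that unit is the design decision:

```python
import hashlib

def assign_arm(unit_id: str, salt: str = "new_ad_format_v1",
               treatment_share: float = 0.5) -> str:
    """Deterministically assign a randomization unit (user, advertiser,
    geo, ...) to an arm by hashing it with an experiment-specific salt."""
    digest = hashlib.sha256(f"{salt}:{unit_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000 / 10_000  # approximately uniform in [0, 1)
    return "treatment" if bucket < treatment_share else "control"

print(assign_arm("advertiser_12345"))
```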
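A back-of-the-envelope power calculation for a binary outcome, assuming a two-proportion z-test (the baseline rate, MDE, and daily traffic below are invented):

```python
from scipy.stats import norm

def n_per_arm(p_base: float, mde_rel: float,
              alpha: float = 0.05, power: float = 0.80) -> float:
    """Approximate sample size per arm for a two-proportion z-test."""
    p_treat = p_base * (1 + mde_rel)
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    var = p_base * (1 - p_base) + p_treat * (1 - p_treat)
    return (z_alpha + z_power) ** 2 * var / (p_treat - p_base) ** 2

# e.g. 10% baseline ad-interaction rate, +2% relative MDE, 100k users/day/arm
n = n_per_arm(0.10, 0.02)
print(f"~{n:,.0f} users per arm, ~{n / 100_000:.0f} days at 100k users/day/arm")
```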
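And a sketch of one test for a fading effect: cut the lift by days since first exposure (a novelty effect decays with exposure age) versus by calendar date (seasonality tracks the calendar). The file and column names are assumptions:

```python
import pandas as pd

# Hypothetical per-user-day experiment data; schema is an assumption.
df = pd.read_parquet("experiment_daily.parquet")
# columns: user_id, arm ("treatment"/"control"), first_exposure, date, engaged (0/1)

df["exposure_age"] = (df["date"] - df["first_exposure"]).dt.days

def lift_by(df: pd.DataFrame, key: str) -> pd.Series:
    """Treatment-minus-control engagement rate, cut by the given column."""
    rates = df.groupby([key, "arm"])["engaged"].mean().unstack("arm")
    return rates["treatment"] - rates["control"]

print(lift_by(df, "exposure_age"))  # decays with age -> novelty effect
print(lift_by(df, "date"))          # moves with the calendar -> seasonality
```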
Assume you can query logs, run experiments, and partner with engineering/product to change instrumentation if needed.