Context
An ads platform supports an Ads Pixel (a tracking script) that advertisers install on their websites and apps to send back conversion events (e.g., purchases, sign-ups). Pixel issues (broken installation, blocked requests, wrong event schema, delayed or missing events) can degrade both measurement and campaign optimization.
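As a rough illustration of what "detecting potential pixel problems" could mean in practice, here is a minimal sketch of a missing-events check: compare today's event volume against a trailing baseline. The function name, window, and threshold are hypothetical; a real system would also model seasonality, site traffic changes, and schema errors.

```python
import statistics

def flag_missing_events(daily_counts, window=28, drop_threshold=0.5):
    """Flag a pixel whose latest daily event count dropped sharply
    versus its trailing median. Hypothetical heuristic, not the
    platform's actual detection logic."""
    history = daily_counts[-(window + 1):-1]  # trailing baseline window
    today = daily_counts[-1]
    baseline = statistics.median(history)
    return baseline > 0 and today < drop_threshold * baseline

# A pixel sending ~1000 events/day that suddenly sends 100 is flagged.
counts = [1000] * 28 + [100]
flag_missing_events(counts)  # → True
```

Even a crude rule like this surfaces the key evaluation question: false positives (noisy alerts erode trust) versus false negatives (real breakages go unnoticed).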
The product team proposes a new feature in Ads Manager:
- The system detects potential pixel problems and proactively notifies advertisers (in-product and/or email) with suggested fixes.
Task
You are asked to evaluate whether this feature is a good idea and to recommend whether to launch it.
What you should cover
- Define success: propose a metric framework with
  - Primary metric(s)
  - Diagnostic metrics
  - Guardrail metrics
- Design an experiment (or an alternative evaluation strategy if an RCT isn't feasible):
  - Unit of randomization and eligibility
  - Treatment/control definition
  - Duration and sample size / power considerations (high level)
  - Key segmentations to check for heterogeneous effects
Identify risks and confounders
:
-
Data quality / measurement concerns (pixel data is itself unreliable)
-
Selection bias (who has a pixel, who sees notifications)
-
Interference / spillovers (agency-managed accounts, shared pixels)
- Explain how you would interpret outcomes and make a launch decision, including what you'd do if metrics move in opposite directions (e.g., better data quality but worse short-term revenue).