This question, from the Analytics & Experimentation domain, evaluates a data scientist's competency in lifecycle email optimization: attributing incremental on-site engagement, designing experiments, selecting metrics, and navigating trade-offs around deliverability and guardrails.

You own lifecycle email for a large consumer app and are tasked with increasing on‑site engagement that is truly attributable to email (incremental, not last‑click). Assume you can run controlled experiments and have standard delivery/behavioral logging.
a) Define one primary objective metric for “on‑site engagement attributable to email,” and 2–3 guardrail metrics (e.g., unsubscribe rate, spam complaint rate, session depth).
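One concrete way to operationalize the primary metric is incremental engaged sessions per eligible user over a fixed window, measured against a persistent global holdout that receives no lifecycle email. A minimal sketch, assuming a 14-day window and synthetic data (the function name and all numbers are illustrative):

```python
import numpy as np
from scipy import stats

def incremental_engagement(treated, holdout, alpha=0.05):
    """Difference in mean engaged sessions per user (treated minus holdout),
    with a normal-approximation CI. Intent-to-treat: every eligible user
    counts, whether or not they received or opened any email."""
    t = np.asarray(treated, dtype=float)
    h = np.asarray(holdout, dtype=float)
    diff = t.mean() - h.mean()
    se = np.sqrt(t.var(ddof=1) / t.size + h.var(ddof=1) / h.size)
    z = stats.norm.ppf(1 - alpha / 2)
    return diff, (diff - z * se, diff + z * se)

# Synthetic example: the holdout never receives lifecycle email.
rng = np.random.default_rng(0)
treated = rng.poisson(2.2, size=50_000)  # engaged sessions per user, 14 days
holdout = rng.poisson(2.0, size=50_000)
lift, ci = incremental_engagement(treated, holdout)
print(f"incremental sessions/user: {lift:.3f}, 95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
```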
b) Propose at least 10 concrete interventions across targeting, timing, content, and system levers (e.g., send‑time optimization, subject/body variants, frequency caps, personalized recommendations, reactivation cohorts, triggered vs batch, pre‑header tests, multi‑subject holdouts, copy length, AMP/email actions). For each intervention, estimate expected lift and note key risks.
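Of these levers, send-time optimization is the simplest to sketch. A per-user heuristic, under assumed column names (`user_id`, `open_ts`) and an assumed fallback hour for users with no open history:

```python
import pandas as pd

def best_send_hour(opens: pd.DataFrame, all_users: pd.Index,
                   default_hour: int = 10) -> pd.Series:
    """Local hour at which each user has historically opened most often;
    users with no open history fall back to default_hour. Assumes columns
    'user_id' and 'open_ts' (timestamps already in the user's local zone)."""
    hours = opens.assign(hour=opens["open_ts"].dt.hour)
    counts = hours.groupby(["user_id", "hour"]).size()
    top = counts.groupby(level="user_id").idxmax().map(lambda ix: ix[1])
    return top.reindex(all_users).fillna(default_hour).astype(int)

# Usage sketch: schedule = best_send_hour(open_log, eligible_user_index)
```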
c) Select your top two ideas and design a robust experiment for each: state the hypothesis, unit of randomization, sample size and power, duration, and analysis plan.
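Each design should include a power calculation before launch. A sketch with placeholder rates, assuming the engagement metric is binarized to a weekly-active rate (statsmodels):

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Placeholder assumptions: 20% baseline weekly-engagement rate and a
# +1 percentage-point minimum detectable effect.
h = proportion_effectsize(0.21, 0.20)  # Cohen's h for the two proportions
n = NormalIndPower().solve_power(effect_size=h, alpha=0.05, power=0.8,
                                 ratio=1.0, alternative="two-sided")
print(f"~{round(n):,} users per arm")
```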
d) List all instrumentation/data required (e.g., deliveries, opens, clicks, device, locale, user email eligibility, prior activity) and explain how you will detect cannibalization between email and push notifications.
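Cannibalization between email and push is cleanest to test with a joint 2×2 factorial holdout. A sketch on synthetic data, where the interaction coefficient is the quantity of interest:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic 2x2 factorial: users independently randomized to email on/off
# and push on/off. A negative, significant email:push interaction on
# engagement suggests the channels cannibalize rather than stack.
rng = np.random.default_rng(1)
n = 40_000
df = pd.DataFrame({"email": rng.integers(0, 2, n),
                   "push": rng.integers(0, 2, n)})
rate = 2.0 + 0.15 * df["email"] + 0.10 * df["push"] - 0.08 * df["email"] * df["push"]
df["sessions"] = rng.poisson(rate)
model = smf.poisson("sessions ~ email * push", data=df).fit(disp=False)
print(model.summary().tables[1])  # the email:push row is the cannibalization test
```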
e) Prioritize your interventions with RICE/ICE and propose a safe ramp plan if early wins improve the primary metric but harm a guardrail.
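The RICE computation itself is trivial once the scores exist; a sketch with placeholder scores (none of these numbers reflect a real backlog):

```python
import pandas as pd

# RICE = (reach * impact * confidence) / effort; all scores are illustrative.
ideas = pd.DataFrame(
    [("send-time optimization", 800_000, 1.0, 0.8, 3),
     ("triggered reactivation", 150_000, 2.0, 0.5, 5),
     ("frequency caps",         900_000, 0.5, 0.9, 2)],
    columns=["idea", "reach", "impact", "confidence", "effort"])
ideas["rice"] = ideas["reach"] * ideas["impact"] * ideas["confidence"] / ideas["effort"]
print(ideas.sort_values("rice", ascending=False).to_string(index=False))
```

A safe ramp typically caps exposure in stages (e.g., 1% → 5% → 25%) with a pre-registered guardrail threshold that triggers automatic rollback.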