This question, from the Analytics & Experimentation domain, evaluates a data scientist's experimental design and causal inference skills: A/B test setup, unit of randomization and exposure, metric selection and guardrails, sample size and power calculations, and identification of biases and logging or implementation pitfalls.
A company is preparing to roll out a new in‑app recommendation widget and needs evidence that it improves user engagement.
Design an A/B experiment to evaluate the widget’s impact on daily active users (DAU) and session length.
Address the following:
1. Unit of randomization and exposure criteria (who counts as treated, and when).
2. Primary metrics (DAU, session length) and guardrail metrics.
3. Sample size, experiment duration, and the power calculation behind them.
4. Potential biases and logging or implementation pitfalls, and how you would mitigate them.
Hints: Consider metric sensitivity, power calculations, novelty effects, and logging bias.
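For the power-calculation hint, a minimal sketch of a per-arm sample-size estimate is shown below, assuming a two-proportion comparison of DAU rate with a hypothetical 40% baseline and a 1-percentage-point minimum detectable effect; the library choice (statsmodels) and all numbers are illustrative placeholders, not part of the question.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical inputs: 40% of eligible users are daily active at baseline,
# and we want to detect an absolute lift of 1 percentage point.
baseline_dau_rate = 0.40
minimum_detectable_effect = 0.01

# Cohen's h effect size for comparing two proportions.
effect_size = proportion_effectsize(
    baseline_dau_rate + minimum_detectable_effect, baseline_dau_rate
)

# Per-arm sample size at 5% significance (two-sided) and 80% power,
# with equal allocation between control and treatment.
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.80,
    ratio=1.0,
    alternative="two-sided",
)
print(f"Approximate users needed per arm: {n_per_arm:,.0f}")
```

A full answer would also justify the choice of test (e.g., user-level randomization with a cluster-robust or user-level aggregate analysis if sessions are the unit of measurement) and account for novelty effects by planning a long enough run to observe post-novelty behavior.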