Fetch is running two A/B experiments in its mobile app.
Product context
The app displays product shelves. Each shelf contains multiple products, and the app also decides the order in which shelves are shown to users.
Experiments
- Experiment 1: Product ordering within a shelf
  - Treatment: change the order of products inside each shelf.
  - Traffic allocation: 50% of app traffic.
- Experiment 2: Shelf ordering in the app
  - Treatment: change the order of shelves shown in the app.
  - Traffic allocation: 9% of app traffic.
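Before analyzing overlap, it helps to be concrete about how two experiments can share traffic without confounding each other. A common approach is to hash each user ID with a different salt per experiment, so the two assignments are (approximately) independent. The sketch below is a minimal illustration, not Fetch's actual assignment system; the salt strings, function names, and bucket counts are assumptions.

```python
import hashlib

def bucket(user_id: str, salt: str, num_buckets: int = 100) -> int:
    """Deterministically map a user to a bucket in [0, num_buckets)."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % num_buckets

def assign(user_id: str) -> dict:
    # Independent salts make the two assignments approximately orthogonal:
    # knowing a user's arm in one experiment tells you nothing about the other.
    return {
        "product_order_treatment": bucket(user_id, "exp1-product-order") < 50,  # 50% traffic
        "shelf_order_treatment": bucket(user_id, "exp2-shelf-order") < 9,       # 9% traffic
    }
```

With this scheme, roughly 4.5% of users (50% x 9%) land in both treatments, and that overlap is random rather than systematic, which is what makes joint analysis possible.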
Assume the primary business goal is to improve user engagement and shopping behavior, measured by metrics such as product clicks, add-to-cart events, purchases, revenue, and receipt submissions.
Questions
- What should you consider when designing and analyzing these two A/B experiments?
- If traffic cannot be perfectly isolated between the two experiments, are the experiment results completely unusable? Why or why not?
- If the two experiments run with overlapping traffic, how would you measure the impact of Experiment 2 on Experiment 1?
- What metrics, guardrails, statistical checks, and interpretation risks would you include in your analysis?
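For the third question, one standard answer is a 2x2 (factorial-style) breakdown: compute Experiment 1's effect separately among users outside and inside Experiment 2's treatment, and take the difference of those two differences as the interaction. The simulation below is a minimal sketch of that calculation; the lift values (+0.20, +0.10, -0.08) and the noise model are invented for illustration only.

```python
import random

random.seed(0)

def simulate_user(in_exp1: bool, in_exp2: bool) -> float:
    """Hypothetical engagement model: each treatment adds a lift, and a
    negative interaction term means Exp 2 dampens Exp 1's effect."""
    base = 1.0
    lift1 = 0.20 if in_exp1 else 0.0
    lift2 = 0.10 if in_exp2 else 0.0
    interaction = -0.08 if (in_exp1 and in_exp2) else 0.0
    return base + lift1 + lift2 + interaction + random.gauss(0, 0.5)

def cell_mean(in_exp1: bool, in_exp2: bool, n: int = 50_000) -> float:
    return sum(simulate_user(in_exp1, in_exp2) for _ in range(n)) / n

# The four cells of the 2x2 design.
m00 = cell_mean(False, False)  # control / control
m10 = cell_mean(True, False)   # Exp 1 treatment only
m01 = cell_mean(False, True)   # Exp 2 treatment only
m11 = cell_mean(True, True)    # both treatments

# Exp 1's effect without vs. with Exp 2's treatment:
effect1_alone = m10 - m00
effect1_under_exp2 = m11 - m01
# The interaction is the difference of those two differences.
interaction_est = effect1_under_exp2 - effect1_alone
```

In practice the same quantity is usually estimated as the interaction coefficient in a regression of the metric on both treatment indicators and their product, which also yields a standard error for testing whether the interaction differs from zero.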