PracHub

Design and analyze a banner A/B test

Last updated: Mar 29, 2026

Quick Overview

This question evaluates a candidate's ability to design and analyze A/B experiments: randomization and exposure decisions, precise metric definitions and guardrails, sample-size and power calculations, analysis plans covering multiple comparisons and covariate adjustment, and diagnostic validation. It is commonly asked because it tests both conceptual understanding and practical application of experimentation methodology: handling measurement issues, statistical assumptions, and the explicit decision rules that determine whether a product change is supported by the data.



Company: Snapchat

Role: Data Scientist

Category: Analytics & Experimentation

Difficulty: hard

Interview Round: Onsite



Posted: Oct 13, 2025, 9:49 PM

A/B Test Design: Home-Page Banner

You are deciding whether to add a home-page banner in a consumer app. Design and analyze the A/B test end-to-end. Assume a typical logged-in user base with multiple sessions per user. Where needed, make minimal assumptions explicit so a first-time reader can follow.

  1. Randomization and exposure
  • Choose the randomization unit (user-level vs session-level). Address cross-session consistency, eligibility/exposure, and potential interference.
  2. Metrics (primary and guardrails)
  • Define: CTR, dwell time after click, retention, and revenue per session. Provide precise metric formulas and denominators.
  • Precisely define accidental clicks and explain how to exclude or reweight them (e.g., dwell < 500 ms or immediate back within 2 s).
  3. Powering
  • Given baseline CTR = 1.5% and expected relative lift = 10% (so 1.65% in treatment), compute per-arm sample size for 90% power and α = 0.05 (two-sided). Show formulas, assumptions (e.g., pooled variance), and any adjustments (e.g., trigger rate, clustering).
  4. Analysis plan
  • Handle multiple banner placements (multiple comparisons), position bias, and novelty effects.
  • Specify whether and how to use CUPED or pre-period covariate adjustment.
  5. Diagnostic checks
  • Exposure-logging validation, sample-ratio checks, bot filtering, and sequential monitoring with proper alpha spending.
  6. Decision rule
  • State explicit ship/no-ship criteria and a fallback plan if accidental-click rates spike.
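The powering step above can be sketched with the standard pooled-variance formula for a two-sided two-proportion z-test. This is a minimal illustration, not part of the question: the function name is ours, and it uses only Python's stdlib `NormalDist` for the z-quantiles.

```python
from math import ceil, sqrt
from statistics import NormalDist

def per_arm_sample_size(p1, p2, alpha=0.05, power=0.90):
    """Per-arm n for a two-sided two-proportion z-test,
    pooled variance under H0, no continuity correction."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # ~1.28 for 90% power
    p_bar = (p1 + p2) / 2                # pooled rate under H0
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Baseline CTR 1.5%, +10% relative lift -> 1.65% in treatment.
n = per_arm_sample_size(0.015, 0.0165)   # roughly 145k users per arm
```

Note that if only a fraction t of users are ever exposed to the banner, an intent-to-treat analysis dilutes the absolute effect by t, inflating the required total n by roughly 1/t².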
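For the multiple-placement comparisons, one common choice (an assumption here; the question does not prescribe a procedure) is Benjamini–Hochberg FDR control across the per-placement p-values. A minimal sketch:

```python
def benjamini_hochberg(pvals, q=0.05):
    """Indices of hypotheses rejected at FDR level q (Benjamini-Hochberg)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # ascending p-values
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k = rank                                  # largest passing rank
    return sorted(order[:k])

# e.g. four banner placements, each tested against control
rejected = benjamini_hochberg([0.001, 0.02, 0.04, 0.30])  # -> [0, 1]
```

Bonferroni (dividing α by the number of placements) is a stricter alternative when any false positive is costly.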
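CUPED residualizes the in-experiment metric on a pre-period covariate, preserving the mean while shrinking variance by a factor of (1 − corr²). A minimal pure-Python sketch, with synthetic data standing in for real logs:

```python
import random
from statistics import mean, variance

def cuped_adjust(y, x):
    """CUPED: subtract theta * (x - mean(x)) from y, where x is a
    pre-experiment covariate and theta = cov(x, y) / var(x)."""
    xm, ym = mean(x), mean(y)
    cov_xy = sum((a - xm) * (b - ym) for a, b in zip(x, y)) / (len(x) - 1)
    theta = cov_xy / variance(x)
    return [b - theta * (a - xm) for a, b in zip(x, y)]

# Synthetic stand-in: pre-period metric x predicts in-experiment metric y.
random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in range(2000)]
y = [a + random.gauss(0.0, 1.0) for a in x]
adj = cuped_adjust(y, x)  # same mean as y, materially lower variance
```

In practice theta is estimated on pooled (treatment + control) data, and the same pre-period version of the primary metric is the usual covariate.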
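The sample-ratio check among the diagnostics is a chi-square test (one degree of freedom) that the observed split matches the designed traffic share; `erfc` gives the 1-df chi-square tail directly, so no stats library is needed. A sketch:

```python
from math import erfc, sqrt

def srm_pvalue(n_control, n_treatment, expected_share=0.5):
    """Sample-ratio-mismatch check: chi-square test (1 df) that the
    observed control/treatment counts match the designed share."""
    total = n_control + n_treatment
    exp_c = total * expected_share
    exp_t = total - exp_c
    chi2 = ((n_control - exp_c) ** 2 / exp_c
            + (n_treatment - exp_t) ** 2 / exp_t)
    return erfc(sqrt(chi2 / 2))  # upper tail of chi-square with 1 df
```

A tiny p-value here (e.g. below 1e-3) signals broken assignment or exposure logging, in which case the experiment's metric results should not be read at all.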

