PracHub

Design a robust A/B test with interference and seasonality

Last updated: Mar 29, 2026

Quick Overview

This question evaluates expertise in experimental design, causal inference, statistical power and minimum detectable effect calculation, variance-reduction techniques, sequential monitoring, and diagnostics for interference, spillovers, and weekly seasonality in A/B testing.


Company: TikTok

Role: Data Scientist

Category: Analytics & Experimentation

Difficulty: hard

Interview Round: Technical Screen




Experiment Design: Redesigned Onboarding with Network Effects and Weekly Seasonality

Background

You are launching a redesigned onboarding flow for a consumer social app. The redesign is expected to increase Day-7 activation among new users. However, onboarding can induce network effects (e.g., users invite others) and there is known weekly seasonality in traffic and behavior.

Task

Design a rigorous experiment plan that addresses the following:

  1. Hypothesis and metrics:
    • State the hypothesis and null/alternative.
    • Specify primary metric(s), guardrail metrics, and network-effect secondary metrics.
    • Provide exact metric definitions, denominators, attribution rules, and time windows.
  2. Randomization and exposure:
    • Choose the unit of randomization and exposure (user, household/device, geo, or graph/cluster) and justify your choice given potential interference from invites.
  3. Power, MDE, and duration:
    • Provide sample size and power analysis, target MDE, and duration assumptions.
    • Explain how you will account for weekly seasonality (e.g., run for multiples of full weeks and allow Day-7 windows to mature).
  4. Variance reduction:
    • Describe techniques such as CUPED with pre-period covariates, stratification, regression adjustment, or geo-matched pairs.
  5. SRM (sample ratio mismatch):
    • Define how you will detect SRM and what you will do if you find it.
  6. Sequential monitoring:
    • Provide a sequential monitoring and stopping plan (e.g., alpha spending) to avoid p-hacking.
  7. Ramp and holdouts:
    • Propose a ramp plan with holdouts and how you will handle novelty and learning effects.
  8. Data quality diagnostics:
    • Describe diagnostics for noncompliance, bot/invalid traffic, and triggered vs. assigned populations.
  9. Interference/spillovers:
    • Explain how you would detect and mitigate interference (cluster randomization, geo experiments, or switchback) and how you would quantify any bias if you end up using user-level randomization.
  10. Decision framework:
    • Describe how you would interpret outcomes if the primary and guardrail metrics disagree, and how you would decide whether to ship.
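For step 3, the sample-size and duration arithmetic can be sketched with a standard two-proportion power calculation. The numbers here are illustrative assumptions, not given in the question: a 20% baseline Day-7 activation rate, a +1 percentage-point absolute MDE, and roughly 50k eligible new users per day.

```python
import math
from scipy.stats import norm

def n_per_arm(p_base, mde_abs, alpha=0.05, power=0.80):
    """Sample size per arm for a two-sided two-proportion z-test."""
    p_trt = p_base + mde_abs
    z_a = norm.ppf(1 - alpha / 2)          # critical value, two-sided
    z_b = norm.ppf(power)                  # power quantile
    var = p_base * (1 - p_base) + p_trt * (1 - p_trt)
    return math.ceil((z_a + z_b) ** 2 * var / mde_abs ** 2)

n = n_per_arm(0.20, 0.01)                  # ~25.6k users per arm

# Duration: fill the sample at an assumed 50k eligible new users/day,
# round up to full weeks (weekly seasonality), then add 7 days so the
# last enrolled cohort's Day-7 activation window can mature.
fill_days = math.ceil(2 * n / 50_000)
run_days = math.ceil(fill_days / 7) * 7
total_days = run_days + 7
```

Note that enrollment can finish quickly at this traffic level, but the full-week rounding and the trailing 7-day maturation window, not sample size, end up dictating the minimum runtime.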
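For step 4, CUPED is a one-line adjustment once a pre-period covariate is available. A minimal sketch on simulated data (the covariate, its correlation with the outcome, and all numbers are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated example: a pre-period covariate (e.g., prior-week sessions)
# that is correlated with the in-experiment outcome.
x = rng.normal(10, 3, size=100_000)
y = 0.6 * x + rng.normal(0, 2, size=100_000)

def cuped_adjust(y, x):
    """CUPED: y_adj = y - theta * (x - mean(x)), theta = cov(y, x) / var(x).
    Subtracting a centered pre-period covariate leaves the treatment-effect
    estimate unbiased (x predates assignment) while shrinking its variance."""
    theta = np.cov(y, x)[0, 1] / np.var(x, ddof=1)
    return y - theta * (x - x.mean())

y_adj = cuped_adjust(y, x)
ratio = np.var(y_adj) / np.var(y)   # well below 1 when x predicts y
```

The variance reduction is roughly 1 minus the squared correlation between covariate and outcome, which is why strong pre-period covariates shorten the required runtime.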
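For step 5, SRM detection is a chi-square goodness-of-fit test of observed arm counts against the intended split, typically with a conservative threshold so false alarms are rare. A sketch with illustrative counts:

```python
from scipy.stats import chisquare

def srm_check(counts, ratios, p_threshold=1e-3):
    """Chi-square goodness-of-fit of observed arm counts vs. the intended
    split. A conservative threshold (p < 0.001) keeps false SRM alarms rare;
    any flag means auditing assignment and logging before reading metrics."""
    total = sum(counts)
    stat, p = chisquare(counts, f_exp=[r * total for r in ratios])
    return p, p < p_threshold

p_ok, flag_ok = srm_check([500_000, 500_800], [0.5, 0.5])    # healthy split
p_bad, flag_bad = srm_check([500_000, 505_000], [0.5, 0.5])  # mismatch: investigate
```

A 1% count imbalance at this scale is wildly improbable under a true 50/50 split, which is why an SRM flag invalidates the readout rather than being noise to average over.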
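For step 6, one common alpha-spending choice is the Lan-DeMets O'Brien-Fleming-type spending function, which spends almost no alpha at early looks. A sketch assuming four evenly spaced looks (e.g., weekly in a 4-week test):

```python
from scipy.stats import norm

def obf_spending(t, alpha=0.05):
    """Lan-DeMets O'Brien-Fleming-type spending function: cumulative
    two-sided alpha available at information fraction t in (0, 1]."""
    return 2 * (1 - norm.cdf(norm.ppf(1 - alpha / 2) / t ** 0.5))

looks = [0.25, 0.50, 0.75, 1.00]           # e.g., weekly looks in a 4-week test
cum = [obf_spending(t) for t in looks]
incr = [cum[0]] + [b - a for a, b in zip(cum, cum[1:])]
# Early looks spend almost no alpha, so a nominally "significant" week-1
# swing cannot justify an early stop by itself.
```

The total alpha spent across all looks is exactly the overall 0.05, which is the anti-p-hacking guarantee: peeking is allowed, but only against pre-committed boundaries.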
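For step 9, if clusters (e.g., invite-graph communities) are randomized to contain spillovers, the power analysis must be inflated by the design effect. A sketch using the Kish formula, with illustrative assumptions: a ~25,600 users/arm user-level sample size, average cluster size 50, and an intra-cluster correlation (ICC) of 0.01.

```python
import math

def design_effect(m, icc):
    """Kish design effect for cluster randomization: 1 + (m - 1) * ICC."""
    return 1 + (m - 1) * icc

def clustered_n_per_arm(n_user_level, m, icc):
    """Inflate a user-level sample size by the design effect."""
    return math.ceil(n_user_level * design_effect(m, icc))

# Assumptions (illustrative): 25,600 users/arm from a user-level power
# analysis, invite-graph clusters of ~50 users, ICC of 0.01.
n_cluster = clustered_n_per_arm(25_600, 50, 0.01)  # ~1.49x the user-level n
```

Even a small ICC inflates the required sample by ~50% here, which is the usual trade-off: cluster randomization buys unbiasedness under interference at the cost of power, so cluster size and ICC estimates belong in the pre-registered plan.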


