PracHub

Design an interference-robust A/B test for monetization

Last updated: Mar 29, 2026

Quick Overview

This question evaluates experimental design and causal inference skills within Analytics & Experimentation, emphasizing interference mitigation, clustering and randomization choices, eligibility and exposure rules, metric hierarchy and guardrails, power and duration planning, variance-reduction techniques, and cluster-robust inference in a two-sided marketplace. It is commonly asked to test a data scientist's ability to balance monetization and growth trade-offs through precise statistical planning (pre-registered stopping rules, SRM detection, an explicit MDE target, and rollout decision thresholds) rather than through purely conceptual understanding.

  • hard
  • TikTok
  • Analytics & Experimentation
  • Data Scientist

Design an interference-robust A/B test for monetization

Company: TikTok

Role: Data Scientist

Category: Analytics & Experimentation

Difficulty: hard

Interview Round: HR Screen

You’re launching a new tipping UI on creator (PGC/OGC) posts to motivate monetization without hurting growth or traffic. Design an A/B test that is robust to interference and supply–demand dynamics. Specify:

  1. Randomization unit and clustering strategy (e.g., creator-level, ego-network, geo-level) to mitigate cross-user and cross-post spillovers.
  2. Eligibility and exposure rules to prevent treatment contamination across US and Asia time zones.
  3. Primary metric hierarchy (e.g., payer conversion per DAU, ARPPU, creator revenue share) and guardrails (retention, session length, abuse reports, ad revenue cannibalization).
  4. Power and duration targeting at least one weekly cycle, a ramp plan, and SRM detection with pre-registered stop rules.
  5. Variance reduction (CUPED covariates such as pre-experiment spend and creator popularity) and cluster-robust inference.
  6. Decision thresholds, and how you’d roll out if treatment helps monetization but slightly hurts growth.

Be specific about exact formulas and the minimal detectable effect you target.
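A minimal sketch of how the randomization and persistence requirements in points 1 and 2 might be implemented: salted, hash-based bucketing at the creator (cluster) level, so a creator's arm is stable across sessions, devices, and regional launch times. The experiment name, traffic split, and IDs below are illustrative assumptions, not TikTok internals.

```python
import hashlib

def assign_arm(creator_id: str, experiment: str = "tipping_ui_v1",
               treat_frac: float = 0.5) -> str:
    """Deterministic, salted bucketing: the same creator always maps to the
    same arm, giving assignment persistence across sessions, devices, and
    region start times (US vs. Asia) without storing any assignments."""
    digest = hashlib.sha256(f"{experiment}:{creator_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform on [0, 1]
    return "treatment" if bucket < treat_frac else "control"

# Stable under re-evaluation, and close to the planned 50/50 split in bulk.
arms = [assign_arm(f"creator_{i}") for i in range(10_000)]
share_treated = arms.count("treatment") / len(arms)
```

Randomizing at the creator level means every viewer of a given post sees the same UI, which removes within-creator cross-post contamination; ego-network or geo clusters generalize the same idea when spillovers are heavier.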
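The metric hierarchy in point 3 reduces to a few ratio definitions. A sketch, where the numbers are made-up illustrations of the formulas, not TikTok figures:

```python
def payer_conversion_per_dau(payers: int, dau: int) -> float:
    """Primary: distinct users who tipped at least once / daily active users."""
    return payers / dau

def arppu(tip_revenue: float, payers: int) -> float:
    """Secondary: average revenue per paying user = tip revenue / payers."""
    return tip_revenue / payers

def creator_revenue_share(creator_payout: float, gross_tips: float) -> float:
    """Fraction of gross tip volume paid out to creators."""
    return creator_payout / gross_tips

rate = payer_conversion_per_dau(payers=12_000, dau=600_000)
avg_spend = arppu(tip_revenue=84_000.0, payers=12_000)
share = creator_revenue_share(creator_payout=58_800.0, gross_tips=84_000.0)
```

Guardrails (retention, session length, abuse reports, ad revenue) follow the same ratio pattern and are tested for non-inferiority rather than for lift.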
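For point 4, one standard approach is the two-proportion sample-size formula inflated by the design effect deff = 1 + (m - 1) * ICC to account for clustering. The baseline rate, MDE, ICC, and audience size below are assumed values for illustration only.

```python
import math
from statistics import NormalDist

def clusters_per_arm(p_base: float, mde_abs: float, icc: float,
                     cluster_size: int, alpha: float = 0.05,
                     power: float = 0.8) -> int:
    """Creators (clusters) needed per arm to detect an absolute lift of
    mde_abs over baseline rate p_base at the given alpha and power.
    n_users = 2 * (z_{1-a/2} + z_{power})^2 * p(1-p) / mde^2, then
    inflated by the design effect deff = 1 + (m - 1) * icc."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = p_base + mde_abs / 2                # midpoint rate for variance
    n_users = 2 * (z_a + z_b) ** 2 * p_bar * (1 - p_bar) / mde_abs ** 2
    deff = 1 + (cluster_size - 1) * icc
    return math.ceil(n_users * deff / cluster_size)

# Example: 2% baseline payer conversion, 10% relative MDE (0.2pp absolute),
# ICC of 0.01, ~200 exposed viewers per creator.
needed = clusters_per_arm(p_base=0.02, mde_abs=0.002, icc=0.01, cluster_size=200)
```

Duration then comes from dividing the required clusters by weekly eligible-creator inflow, rounded up to whole weekly cycles so day-of-week seasonality averages out.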
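SRM detection (also point 4) is a chi-square goodness-of-fit test of observed arm counts against the planned split; with 1 degree of freedom the p-value has a closed form via erfc, so no scipy is needed. Traffic counts below are illustrative.

```python
import math

def srm_pvalue(n_control: int, n_treatment: int, treat_frac: float = 0.5) -> float:
    """Chi-square goodness-of-fit p-value (1 df) for sample ratio mismatch.
    For 1 df, the survival function is P(X > x) = erfc(sqrt(x / 2))."""
    total = n_control + n_treatment
    exp_t = total * treat_frac
    exp_c = total - exp_t
    chi2 = (n_treatment - exp_t) ** 2 / exp_t + (n_control - exp_c) ** 2 / exp_c
    return math.erfc(math.sqrt(chi2 / 2))

p_ok = srm_pvalue(50_210, 49_790)   # small imbalance on 100k units
p_bad = srm_pvalue(52_000, 48_000)  # 2pp imbalance on 100k units
```

A common pre-registered rule is to pause the experiment automatically when p < 0.001, since SRM usually signals broken logging or assignment rather than a real effect.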
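Point 5's CUPED adjustment is Y_adj = Y - theta * (X - mean(X)) with theta = cov(Y, X) / var(X), where X is a pre-experiment covariate such as prior spend or creator popularity. The simulated data below only demonstrates the variance reduction; the correlation strength is an assumption.

```python
import random
from statistics import mean, variance

random.seed(7)

def cuped_adjust(y, x):
    """Y_adj = Y - theta * (X - mean(X)), theta = cov(Y, X) / var(X).
    The adjustment leaves the mean (and so the treatment effect estimate)
    unchanged while shrinking variance by roughly corr(Y, X)^2."""
    mx, my = mean(x), mean(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (len(x) - 1)
    theta = cov / variance(x)
    return [yi - theta * (xi - mx) for xi, yi in zip(x, y)]

# Simulated per-creator revenue, strongly correlated with pre-period spend.
pre = [random.gauss(100, 20) for _ in range(5_000)]
post = [0.8 * xi + random.gauss(0, 10) for xi in pre]

post_adj = cuped_adjust(post, pre)
var_reduction = 1 - variance(post_adj) / variance(post)
```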
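For cluster-robust inference (also point 5), the simplest defensible approach is to analyze at the unit of randomization: aggregate to creator-level means and test across clusters, since per-user tests understate variance when outcomes are correlated within a creator's audience. The simulation parameters below are arbitrary assumptions.

```python
import random
from math import sqrt
from statistics import mean, stdev

random.seed(11)

def cluster_diff_t(treated_means, control_means):
    """Welch-style t statistic computed on cluster (creator) means, so the
    effective sample size is the number of clusters, not of users."""
    se = sqrt(stdev(treated_means) ** 2 / len(treated_means)
              + stdev(control_means) ** 2 / len(control_means))
    return (mean(treated_means) - mean(control_means)) / se

def simulate_arm(lift, n_creators=200, users_per_creator=50):
    """Each creator contributes a shared random effect, inducing
    within-cluster correlation in user-level outcomes."""
    means = []
    for _ in range(n_creators):
        creator_effect = random.gauss(0, 0.5)
        users = [random.gauss(1.0 + lift + creator_effect, 1.0)
                 for _ in range(users_per_creator)]
        means.append(mean(users))
    return means

t_null = cluster_diff_t(simulate_arm(0.0), simulate_arm(0.0))
t_alt = cluster_diff_t(simulate_arm(0.3), simulate_arm(0.0))
```

Cluster-robust (CRVE) standard errors on a user-level regression or a cluster bootstrap are heavier-weight alternatives that reach the same goal.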


Oct 13, 2025, 9:49 PM


