PracHub

Design a creator posting-frequency experiment

Last updated: Mar 29, 2026

Quick Overview

This question evaluates a Data Scientist's experimental-design and causal-inference skills within the Analytics & Experimentation domain: precise metric definition, randomization and interference reasoning, eligibility and ITT-versus-triggered analyses, sample-size and power estimation, modeling of skewed count data, sequential monitoring, and heterogeneity analysis. Interviewers use it to assess end-to-end experimental thinking that balances statistical rigor with operational constraints and guardrails; it requires both practical application (power calculations, triggering rules, model choices) and conceptual understanding (interference, SRM root-cause reasoning).

  • Medium
  • TikTok
  • Analytics & Experimentation
  • Data Scientist

Design a creator posting-frequency experiment

Company: TikTok

Role: Data Scientist

Category: Analytics & Experimentation

Difficulty: Medium

Interview Round: Onsite

You’re on the Creator Growth (PGC) team of a short‑video platform. Product proposes a push/email nudge expected to raise creators’ weekly posting frequency by 10%. Design an experiment and analysis plan:

  1. Precisely define: (a) primary metric = creator‑week posts per active creator; (b) secondary = creator retention, viewer engagement; (c) guardrails = viewer dissatisfaction/complaint rate, abuse reports, latency/crash rate. Write exact formulas and units for each.
  2. Choose the randomization unit and targeting (creator-level vs. geo/graph clusters). Justify in terms of interference/spillover (e.g., shared viewers, duet/remix features) and operational complexity.
  3. Eligibility/triggering: define which creators are eligible (e.g., ≥1 post in the prior 28 days), when they are considered “treated”, and how you’ll handle creators who never open the nudge. Contrast ITT vs. triggered analysis and which you ship on.
  4. Power/duration: with baseline mean 1.8 posts/week (sd 2.5) among eligible creators, two‑sided α=0.05, power=90%, MDE=+4% relative on the primary metric, and equal allocation, estimate the required sample size and test length. State your assumptions and show the formulas or approximations you use.
  5. Analysis: specify pre‑period adjustment (e.g., CUPED), model choice for skew/zeros (e.g., log(1+x) vs. Negative Binomial), heterogeneity by geography and creator tenure, and your SRM checks. It’s 2025‑09‑01: if SRM triggers, list the top three root‑cause checks you’d run immediately.
  6. Novelty/fatigue: propose a ramp strategy and a sequential‑monitoring plan that controls Type I error.
  7. Suppose results show US +3% lift, BR −2% lift, and global +1% lift. What do you ship, where, and what follow‑ups do you run to validate the geo divergence?
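The sample-size part of step 4 can be answered with the standard two-sample normal approximation for a difference in means, n per arm = 2(z₁₋α/₂ + z_power)²·σ²/δ². A minimal sketch with the stated numbers (baseline 1.8 posts/week, sd 2.5, 4% relative MDE), assuming equal variance in both arms; the function name is illustrative:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(baseline_mean: float, sd: float, rel_mde: float,
              alpha: float = 0.05, power: float = 0.90) -> int:
    """Two-sample z-approximation: n = 2 * (z_{1-a/2} + z_power)^2 * sd^2 / delta^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)          # ~1.28 for 90% power
    delta = rel_mde * baseline_mean                # absolute MDE: 0.072 posts/week
    return ceil(2 * (z_alpha + z_power) ** 2 * sd ** 2 / delta ** 2)

# Roughly 25k eligible creators per arm (~51k total) for the stated parameters.
print(n_per_arm(1.8, 2.5, 0.04))
```

Test length then follows from the inflow of eligible creators; note that a variance-reduction technique such as CUPED (step 5) shrinks σ and hence the required n.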
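For the pre-period adjustment in step 5, CUPED subtracts the component of the outcome that is linearly explained by the same metric measured in the pre-period; θ is the OLS slope of outcome on covariate. A sketch assuming NumPy arrays of per-creator posting counts:

```python
import numpy as np

def cuped_adjust(y: np.ndarray, x_pre: np.ndarray) -> np.ndarray:
    """Return y - theta * (x_pre - mean(x_pre)), with
    theta = cov(x_pre, y) / var(x_pre).  The mean (and thus the treatment-effect
    estimate) is unchanged; variance shrinks by the squared correlation."""
    theta = np.cov(x_pre, y, ddof=1)[0, 1] / np.var(x_pre, ddof=1)
    return y - theta * (x_pre - x_pre.mean())
```

Because posting frequency is strongly autocorrelated week over week, the pre-period metric is usually the single best covariate here.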
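The SRM check in step 5 is a chi-square goodness-of-fit test of observed assignment counts against the planned split. A pure-stdlib sketch for the two-arm case, where with 1 degree of freedom the p-value reduces to erfc(√(χ²/2)):

```python
from math import erfc, sqrt

def srm_p_value(n_treatment: int, n_control: int, p_treatment: float = 0.5) -> float:
    """Chi-square test (1 df) for sample-ratio mismatch.
    A tiny p-value means assignment counts deviate from the planned split."""
    total = n_treatment + n_control
    expected_t = total * p_treatment
    expected_c = total * (1 - p_treatment)
    chi2 = ((n_treatment - expected_t) ** 2 / expected_t
            + (n_control - expected_c) ** 2 / expected_c)
    return erfc(sqrt(chi2 / 2))  # survival function of chi-square with 1 df

# srm_p_value(5100, 4900) ≈ 0.046: a 51/49 split on 10,000 creators is already
# suspicious and warrants the root-cause checks before trusting any results.
```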
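For step 6, one common way to control Type I error under repeated looks is the Lan–DeMets alpha-spending approach with an O'Brien–Fleming-like spending function, α(t) = 2 − 2Φ(z₁₋α/₂/√t), which spends almost no alpha early in the ramp. A sketch of the spending function itself (not the full group-sequential boundary computation):

```python
from math import sqrt
from statistics import NormalDist

def obrien_fleming_spend(t: float, alpha: float = 0.05) -> float:
    """Cumulative alpha spent at information fraction t in (0, 1],
    using the O'Brien-Fleming-like spending function of Lan & DeMets."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return 2 * (1 - NormalDist().cdf(z / sqrt(t)))

# At t = 1 the full alpha (0.05) is spent; at the halfway look only ~0.006
# is available, which makes early peeks during the ramp very conservative.
```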


Related Interview Questions

  • Define Ultra success metrics and detect suspicious transactions - TikTok (Easy)
  • Plan DS approach for biker delivery project - TikTok (Easy)
  • Define and critique a user activity metric - TikTok (Easy)
  • Design and decompose Trust & Safety risk metrics - TikTok (Easy)
  • Analyze promo anomaly and design risk guardrails - TikTok (Medium)

