
Design and decompose Trust & Safety risk metrics

Last updated: Mar 29, 2026

Quick Overview

This question evaluates a data scientist's competency in Trust & Safety metric design, including defining primary and diagnostic metrics, decomposing metrics into actionable trees, and recognizing measurement and data-quality pitfalls.

  • easy
  • TikTok
  • Analytics & Experimentation
  • Data Scientist

Design and decompose Trust & Safety risk metrics

Company: TikTok

Role: Data Scientist

Category: Analytics & Experimentation

Difficulty: easy

Interview Round: Technical Screen



Related Interview Questions

  • Define Ultra success metrics and detect suspicious transactions - TikTok (easy)
  • Plan DS approach for biker delivery project - TikTok (easy)
  • Define and critique a user activity metric - TikTok (easy)
  • Analyze promo anomaly and design risk guardrails - TikTok (medium)
  • Design an interference-robust A/B test for monetization - TikTok (hard)
Nov 1, 2025

You are a Data Scientist on a Trust & Safety team for a short-video platform (similar to TikTok/Reels).

The team asks: “How would you design risk metrics, and how would you decompose them?”

Context

The platform faces multiple risk sources and controls:

  • Risk types: policy-violating content (nudity/violence/hate), spam/scams, bot activity, account takeover, misinformation.
  • Detection & enforcement signals: ML model predictions, proactive human review, reactive user reports, and enforcement actions (remove, downrank, age-gate, suspend).

Task

  1. Propose a metric framework to measure “risk” at the platform level.
     • Define primary metric(s) and diagnostic/guardrail metrics.
     • Specify exact definitions (numerator/denominator), unit (per view/per user/per content), and time window (e.g., daily in UTC).
  2. Explain what it means to decompose the risk metric, and provide a concrete metric tree/breakdown that helps identify why risk increased or decreased.
     • Show at least two different decomposition approaches (e.g., by funnel stage vs. by segment).
  3. List the key data/measurement pitfalls and how you would address them.
     • Examples: label delay, selection bias from user reports, changing enforcement policies, feedback loops from downranking/removal, duplicated content, and seasonality.

Your answer should be actionable for ongoing monitoring and root-cause analysis, not just high-level ideas.
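To make part 1 concrete, one common choice of primary prevalence metric is a violating view rate: views of content ultimately labeled violating, over all views, computed daily. The sketch below (Python/pandas, with illustrative column names and made-up numbers, not a specific platform's schema) shows the exact numerator/denominator definition in code:

```python
import pandas as pd

# Illustrative event data: one row per (date, video) with view counts and a
# final human-review label. All names and numbers are hypothetical.
views = pd.DataFrame({
    "date": ["2026-03-01"] * 3 + ["2026-03-02"] * 3,
    "video_id": ["a", "b", "c", "a", "b", "d"],
    "views": [1000, 500, 200, 800, 400, 100],
    "is_violating": [False, True, False, False, True, False],
})

def violating_view_rate(df: pd.DataFrame) -> pd.Series:
    # Numerator: daily views of content ultimately labeled violating.
    # Denominator: all daily views. Unit: per view. Window: daily (UTC).
    daily = (
        df.assign(viol_views=df["views"] * df["is_violating"])
          .groupby("date")[["viol_views", "views"]]
          .sum()
    )
    return daily["viol_views"] / daily["views"]

print(violating_view_rate(views))
```

A per-view unit is often preferred over per-content because it weights risk by actual user exposure; a rarely seen violating video contributes little, a viral one contributes a lot.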
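For part 2, one concrete segment decomposition is the standard mix-versus-rate split: since platform-level risk is the traffic-share-weighted average of segment rates, a period-over-period change separates exactly into "segments got riskier" (rate effect) and "traffic shifted toward riskier segments" (mix effect). A minimal sketch with made-up segment names and numbers:

```python
import pandas as pd

# Illustrative segment-level data for two periods: total views and
# violating views per content vertical. All names/numbers are made up.
data = pd.DataFrame({
    "period": ["t0", "t0", "t1", "t1"],
    "segment": ["comedy", "news", "comedy", "news"],
    "views": [8000, 2000, 7000, 3000],
    "viol_views": [40, 40, 35, 90],
})

def decompose_vvr_change(df: pd.DataFrame) -> dict:
    # Platform VVR = sum over segments of (traffic share * segment VVR),
    # so the t0 -> t1 delta splits exactly into:
    #   rate effect: segment risk changed, holding the old traffic mix fixed
    #   mix effect:  traffic shifted toward segments with different risk
    p = df.pivot(index="segment", columns="period")
    share0 = p["views"]["t0"] / p["views"]["t0"].sum()
    share1 = p["views"]["t1"] / p["views"]["t1"].sum()
    rate0 = p["viol_views"]["t0"] / p["views"]["t0"]
    rate1 = p["viol_views"]["t1"] / p["views"]["t1"]
    return {
        "rate_effect": (share0 * (rate1 - rate0)).sum(),
        "mix_effect": ((share1 - share0) * rate1).sum(),
        "total_change": (share1 * rate1).sum() - (share0 * rate0).sum(),
    }

print(decompose_vvr_change(data))
```

The two effects sum exactly to the total change, which is what makes this usable for root-cause analysis: a move driven mostly by mix points at ranking/distribution changes, while a rate-driven move points at the segment's content or detection pipeline.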
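For the label-delay pitfall in part 3, a common correction is a maturity (completeness) adjustment: scale up still-maturing violation counts for recent days by the fraction of eventual labels that historically arrive by that age. A minimal sketch, with an entirely illustrative maturity curve:

```python
# Hypothetical label-maturity curve: fraction of eventual violation labels
# that have typically arrived k days after the view, estimated from
# historical cohorts. Numbers are illustrative, not from any real platform.
MATURITY_BY_AGE_DAYS = {0: 0.40, 1: 0.70, 2: 0.85, 3: 0.95}

def maturity_adjusted_viol_views(observed: float, age_days: int) -> float:
    # Inflate still-maturing counts by historical completeness so recent
    # days are comparable to fully labeled older days on the same chart.
    completeness = MATURITY_BY_AGE_DAYS.get(age_days, 1.0)
    return observed / completeness
```

For example, 40 violating views observed one day after the fact project to roughly 57 once labels fully mature (40 / 0.70). Without this adjustment, the metric for the most recent days is biased low and every dashboard appears to show risk "improving" at the right edge.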

