This question evaluates product analytics, experimental design, and causal thinking for content-moderation algorithms, specifically metric specification, trade-off and harm analysis, and online-experiment logistics. It is commonly asked to gauge a data scientist's ability to balance detection accuracy, stakeholder impact, and business objectives in production features, and falls in the Analytics & Experimentation category for a Data Scientist position. At a high level of abstraction it probes system-level reasoning about problem scoping, failure modes, metric frameworks, A/B or quasi-experiment setup, and post-launch monitoring, without requiring implementation-level detail.
You are interviewing for a Meta DSA (product analytics / data science) role. The product team is launching a new Stolen Post Detection algorithm that flags posts suspected of being copied/reposted without attribution, and then triggers actions (e.g., downrank, warning label, creator notification, or removal).
Design an evaluation plan covering: metric specification, trade-off and harm analysis, A/B or quasi-experiment setup, and post-launch monitoring.
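As one concrete illustration of the experiment-setup portion, a candidate might sketch how to size the A/B test for a binary guardrail or success metric (e.g., the rate of user appeals or reports among treated posts) using a standard two-proportion z-test power calculation. The baseline rate, effect size, and function name below are illustrative assumptions, not figures given in the prompt:

```python
import math
from statistics import NormalDist

def sample_size_per_arm(p1: float, p2: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-arm sample size to detect a shift in a binary
    metric from baseline rate p1 to treatment rate p2, using a
    two-sided two-proportion z-test at significance `alpha` with the
    given power. Standard normal-approximation formula:
        n = (z_{1-a/2} + z_{power})^2 * (p1(1-p1) + p2(1-p2)) / (p2-p1)^2
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)           # power quantile
    variance = p1 * (1 - p1) + p2 * (1 - p2)       # sum of Bernoulli variances
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

# Hypothetical example: baseline report rate 2.0%, hoping to detect a
# drop to 1.8% (a 10% relative reduction) at 80% power.
n = sample_size_per_arm(0.020, 0.018)
print(f"posts needed per arm: {n}")
```

A smaller detectable effect drives the required sample up quadratically, which is exactly the lever a candidate should discuss when the team asks how long the experiment must run.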