# System Design: Detect weapon-related posts

Design a system for a social media platform to detect and moderate posts that contain weapons.
A "post" may include:

- **Text** (caption/body)
- **Images** (one or more)
- Optionally a **short video** (assume you can treat it as sampled frames)
## Requirements

- **Real-time decisioning** for newly created posts: either **allow**, **block**, or **send to human review**.
- Support **high throughput** (assume millions of posts/day) with **low latency** for the user-facing publish flow.
- Handle **model uncertainty** and reduce harm:
  - Minimize false negatives for clearly dangerous content.
  - Keep false positives low enough to avoid mass user frustration.
- Provide **auditability** (why was content blocked) and **appeals**.
- Include an approach for **continuous improvement** (feedback loop, retraining, drift detection).
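One way to make the allow/block/review requirement concrete is a two-threshold policy over a model's weapon-probability score. The thresholds and function below are illustrative assumptions, not part of the prompt; in practice they would be tuned against the false-negative and false-positive targets above.

```python
# Illustrative two-threshold decision policy. The threshold values are
# assumptions to be calibrated against harm/false-positive targets.
BLOCK_THRESHOLD = 0.95   # at or above this score: block automatically
REVIEW_THRESHOLD = 0.60  # between thresholds: route to human review

def decide(weapon_score: float) -> str:
    """Map a model's weapon-probability score to a moderation action."""
    if weapon_score >= BLOCK_THRESHOLD:
        return "block"
    if weapon_score >= REVIEW_THRESHOLD:
        return "review"
    return "allow"
```

Keeping the band between the two thresholds wide sends more uncertain content to humans, trading review cost for fewer automated mistakes in either direction.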
## What to cover

- API + data flow and key components
- Storage and messaging choices
- ML model serving (online vs. offline, multi-modal)
- Human-in-the-loop moderation workflow
- Metrics/SLOs and monitoring
- Failure modes and abuse/attack considerations (adversarial content, evasion)
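As a starting point for the API and data-flow discussion, a hypothetical publish handler could run a cheap synchronous text check on the user-facing path and enqueue the post for deeper asynchronous multi-modal scoring. All names here (`Post`, `publish`, `fast_text_check`, the in-process queue standing in for a durable broker such as Kafka) are assumptions for illustration, not a prescribed design.

```python
import queue
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    text: str
    image_urls: list = field(default_factory=list)

# Stand-in for a durable message broker (e.g., Kafka) in a real system.
deep_scan_queue: "queue.Queue[Post]" = queue.Queue()

def fast_text_check(text: str) -> bool:
    """Illustrative placeholder for a cheap blocklist/classifier check."""
    return "buy a gun" in text.lower()

def publish(post: Post) -> str:
    """Fast synchronous path: cheap text check, then enqueue for deep scan."""
    if fast_text_check(post.text):
        return "blocked"
    deep_scan_queue.put(post)  # async multi-modal ML scoring happens downstream
    return "published_pending_scan"
```

This split keeps publish latency low (only the cheap check is on the critical path) while the expensive image/video models run off a queue, which is one common answer to the throughput and latency requirements above.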