
Design a distributed web crawler

Last updated: Mar 29, 2026

Quick Overview

This question evaluates the ability to design a scalable, fault-tolerant distributed system for web crawling, covering competencies such as URL deduplication, politeness (rate limiting and robots.txt handling), and recovery from worker failures.



Company: Anthropic

Role: Software Engineer

Category: System Design

Difficulty: medium

Interview Round: Technical Screen



Related Interview Questions

  • Design a one-to-one chat system - Anthropic (medium)
  • Design One-to-One Chat - Anthropic (medium)
  • How to stream a large file to 1000 hosts fastest - Anthropic (medium)
  • Design guardrails and fallback for LLM reliability - Anthropic (hard)
  • Design a Crash-Resilient LRU Cache - Anthropic (hard)

Problem

Design a web crawler that starts from one or more seed URLs and continuously discovers and fetches pages.

Requirements

  • Inputs: One or more seed URLs.
  • Outputs: Fetched page contents and metadata stored for later indexing/analysis.
  • Core goals:
    • Crawl at scale (large number of pages).
    • Avoid crawling the same URL repeatedly (deduplication).
    • Be polite: respect per-host rate limits and robots.txt.
    • Be fault-tolerant (workers can crash; crawl should continue). A single-node sketch covering these goals follows this list.
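
One way to make these goals concrete on a single node is sketched below: a frontier queue plus a seen-URL set for deduplication, a per-host timestamp map for rate limiting, and a cached robots.txt parser per host. `fetch_page`, `extract_links`, and `store` are hypothetical placeholders for the HTTP client, HTML parser, and storage layer, and the one-second delay is an assumed politeness setting, not part of the problem statement.

```python
import time
import urllib.robotparser
from collections import deque
from urllib.parse import urlparse

PER_HOST_DELAY = 1.0  # assumed politeness delay between requests to one host

def crawl(seed_urls, fetch_page, extract_links, store):
    frontier = deque(seed_urls)   # URLs waiting to be fetched
    seen = set(seed_urls)         # deduplication: every URL ever enqueued
    last_fetch = {}               # host -> timestamp of last request
    robots = {}                   # host -> cached RobotFileParser

    while frontier:
        url = frontier.popleft()
        host = urlparse(url).netloc

        # Politeness: consult robots.txt, cached per host.
        if host not in robots:
            rp = urllib.robotparser.RobotFileParser()
            rp.set_url(f"https://{host}/robots.txt")
            try:
                rp.read()
            except OSError:
                # robots.txt unreachable: parser stays unread, so
                # can_fetch() returns False and the URL is skipped.
                pass
            robots[host] = rp
        if not robots[host].can_fetch("*", url):
            continue

        # Politeness: simple per-host rate limit.
        wait = PER_HOST_DELAY - (time.time() - last_fetch.get(host, 0))
        if wait > 0:
            time.sleep(wait)
        last_fetch[host] = time.time()

        page = fetch_page(url)   # HTTP GET (placeholder)
        store(url, page)         # persist content + metadata for indexing

        # Discovery: enqueue only URLs we have never seen before.
        for link in extract_links(page):
            if link not in seen:
                seen.add(link)
                frontier.append(link)
```

In a real crawler the `frontier`, `seen`, and `last_fetch` structures would live in durable services rather than process memory; this sketch only illustrates where deduplication and politeness checks sit in the fetch loop.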

Follow-up

How would you redesign/optimize the crawler when you have multiple servers (many crawler workers)? You don’t need to implement code—describe the architecture and key data structures/services.
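
For the multi-server follow-up, one common approach (not prescribed by the question) is to shard the frontier by host, so that exactly one worker owns each host's rate limit and robots.txt cache, while deduplication moves to a shared service such as a distributed key-value store or Bloom filter. The sketch below shows only a hypothetical routing rule; `NUM_WORKERS`, the per-worker queues, and the shared `seen_urls` set are assumptions for illustration.

```python
import hashlib
from urllib.parse import urlparse

NUM_WORKERS = 16  # assumed number of crawler workers

def owner_worker(url: str) -> int:
    """Map a URL's host to a worker shard via a stable hash of the hostname."""
    host = urlparse(url).netloc.lower()
    digest = hashlib.sha1(host.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_WORKERS

def route(discovered_urls, queues, seen_urls):
    """Push newly discovered URLs onto per-worker queues, deduplicating
    against a shared seen set (a distributed store in a real system)."""
    for url in discovered_urls:
        if url in seen_urls:
            continue
        seen_urls.add(url)
        queues[owner_worker(url)].append(url)
```

Hashing on the hostname rather than the full URL keeps all pages of a site on one worker, which makes per-host politeness easy to enforce; if a worker crashes, its hosts can be reassigned to the remaining workers and the crawl continues from the durable queues.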


