PracHub

Design a Concurrent Domain Crawler

Last updated: Mar 29, 2026

Quick Overview

This question evaluates a candidate's understanding of concurrent system design, web crawling fundamentals, URL frontier organization, duplicate detection, concurrency control, network I/O models, fault tolerance, and politeness mechanisms such as throttling.

Company: Anthropic

Role: Software Engineer

Category: System Design

Difficulty: hard

Interview Round: Technical Screen


Related Interview Questions

  • Design a one-to-one chat system - Anthropic (medium)
  • How to stream a large file to 1000 hosts fastest - Anthropic (medium)
  • Design guardrails and fallback for LLM reliability - Anthropic (hard)
  • Design a Crash-Resilient LRU Cache - Anthropic (hard)
Jan 6, 2026

Design a crawler that starts from one seed URL and explores all reachable pages in the same domain efficiently.

Discuss:

  • How you would structure the frontier of URLs to visit.
  • How to prevent duplicate fetches across concurrent workers.
  • Whether you would use asynchronous I/O, multithreading, or multiprocessing, and why.
  • How to detect when the crawl is complete.
  • How to handle failures, timeouts, and back-pressure.
  • How to enforce politeness limits such as request throttling.
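One way to sketch most of these pieces together is a single-process asyncio crawler: a queue as the FIFO frontier, a `seen` set that claims a URL before it is enqueued (so no two workers fetch the same page), and `queue.join()` to detect completion. This is a minimal illustration, not the official solution; `fetch(url)` is an injected coroutine standing in for a real HTTP client (e.g. aiohttp) so the sketch stays self-contained, and all names here are assumptions.

```python
import asyncio
from urllib.parse import urljoin, urlparse

async def crawl(seed: str, fetch, workers: int = 8, per_host_delay: float = 0.0):
    """Crawl every reachable page on the seed's domain.

    `fetch(url)` is an injected coroutine returning (status, links).
    Completion: queue.join() returns once every enqueued URL has been
    processed and no worker can discover new ones.
    """
    domain = urlparse(seed).netloc
    frontier = asyncio.Queue()      # FIFO frontier; a bounded Queue would add back-pressure
    seen: set[str] = set()          # claimed URLs; checked *before* enqueue to prevent duplicates
    results: dict[str, int] = {}    # url -> status (-1 on failure)

    seen.add(seed)
    frontier.put_nowait(seed)

    async def worker():
        while True:
            url = await frontier.get()
            try:
                if per_host_delay:
                    await asyncio.sleep(per_host_delay)   # crude politeness delay
                status, links = await fetch(url)          # real code would wrap in a timeout + retries
                results[url] = status
                for link in links:
                    link = urljoin(url, link)
                    if urlparse(link).netloc == domain and link not in seen:
                        seen.add(link)                    # claim first, then enqueue
                        frontier.put_nowait(link)
            except Exception:
                results[url] = -1                         # record failure; real code would retry/backoff
            finally:
                frontier.task_done()

    tasks = [asyncio.create_task(worker()) for _ in range(workers)]
    await frontier.join()                                 # crawl complete: frontier drained
    for t in tasks:
        t.cancel()
    await asyncio.gather(*tasks, return_exceptions=True)
    return results
```

Async I/O fits here because the workload is network-bound: thousands of in-flight fetches cost one thread, whereas multiprocessing would only pay off for CPU-heavy parsing. Cross-process duplicate detection would need a shared store (e.g. Redis `SETNX`) instead of the in-memory set.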

Assume the workload is primarily fetching web pages over the network.
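Because the workload is network-bound, politeness throttling is usually enforced per domain rather than per worker. A common choice is a token bucket: requests consume tokens, tokens refill at a fixed rate, and a small capacity allows short bursts. A minimal asyncio sketch (class and parameter names are illustrative assumptions, not from the question):

```python
import asyncio
import time

class TokenBucket:
    """Per-domain limiter: refills `rate` tokens/sec up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity      # start full
        self.last = time.monotonic()

    async def acquire(self) -> None:
        """Wait until one token is available, then consume it."""
        # Single event loop, and no await between the refill check and the
        # decrement, so no lock is needed.
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            await asyncio.sleep((1 - self.tokens) / self.rate)
```

Workers would call `await bucket.acquire()` before each fetch; with one bucket per host, the crawler caps its request rate to any single server regardless of worker count.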


© 2026 PracHub. All rights reserved.