Design and operate a monolith on Kubernetes
Company: Figma
Role: Software Engineer
Category: System Design
Difficulty: hard
Interview Round: HR Screen
You are joining an infra team whose backend is a single monolithic service, and the company is not pursuing microservices or aggressive scaling. How would you design and operate this monolith on Kubernetes? Discuss:
- deployment topology (pods, replicas, node pools), networking (Services, Ingress), and traffic management;
- configuration and secret management, CI/CD, and rollout strategies (blue/green, canary);
- resource requests/limits, autoscaling options, and handling stateful dependencies (databases, caches, object storage);
- observability (logs, metrics, traces), incident response, and disaster recovery/backups.
Also compare the trade-offs of staying monolithic versus adopting microservices in this context, specify concrete signals that would justify evolving the architecture, and outline a low-risk, incremental migration path if those signals appear (service boundaries, data ownership, API contracts, and team workflows).
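A strong answer often grounds the deployment-topology and resource-management points in a concrete manifest. The following is a minimal illustrative sketch, not a prescribed solution; all names, the image reference, port, and resource values are assumptions for illustration:

```yaml
# Illustrative sketch: names, image, port, and values are assumed, not given.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monolith
spec:
  replicas: 3                # multiple replicas spread across nodes for availability
  selector:
    matchLabels:
      app: monolith
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0      # keep full capacity during a rollout
      maxSurge: 1            # bring up one new pod at a time
  template:
    metadata:
      labels:
        app: monolith
    spec:
      containers:
      - name: app
        image: registry.example.com/monolith:v1.2.3   # hypothetical image
        ports:
        - containerPort: 8080
        resources:
          requests:          # what the scheduler reserves for placement
            cpu: "500m"
            memory: 1Gi
          limits:            # hard caps; exceeding the memory limit OOM-kills the pod
            cpu: "2"
            memory: 2Gi
        readinessProbe:      # gate Service traffic until the app reports healthy
          httpGet:
            path: /healthz
            port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: monolith
spec:
  selector:
    app: monolith
  ports:
  - port: 80
    targetPort: 8080
```

A canary rollout can then be layered on top, for example by running a second Deployment with a shared Service selector or by weighting traffic at the Ingress layer.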
Quick Answer: This question evaluates a candidate's ability to design and operate a monolithic application on Kubernetes, covering platform-engineering topics such as deployment topology, networking and traffic management, CI/CD and rollout strategies, resource management and autoscaling, stateful dependencies, observability, incident response, and disaster recovery. Commonly asked in System Design interviews, it probes the practical application of distributed-systems and SRE principles, in particular the ability to reason about operational trade-offs, reliability, and incremental migration paths, and it emphasizes architectural reasoning over purely conceptual understanding.