Scenario
You are interviewing for a front-end role. Design a web application that provides a chat experience similar to ChatGPT. The interviewer expects a front-end-focused system design, but you should also sketch the key back-end pieces at a high level.
Functional requirements
- Users can:
  - Start a new conversation and view a list of past conversations.
  - Send a message (prompt) and receive an AI assistant reply.
  - See the assistant response stream in progressively ("AI is typing" / token-by-token effect); see the streaming sketch after this list.
  - Scroll through message history.
- Persistence:
  - Conversations should be persisted so they remain after a refresh or reopen.
  - Address a follow-up: handling storage persistence tradeoffs (local-only vs server-backed, offline, multi-device).
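One way to illustrate the token-by-token effect is to read the streamed response body incrementally on the client. This is a minimal sketch, assuming a hypothetical `POST /api/conversations/:id/messages` endpoint that streams plain-text tokens in the response body:

```typescript
// Minimal sketch: read a streamed response body chunk by chunk and hand
// each decoded piece to the UI as it arrives.
// The endpoint URL and plain-text token format are illustrative assumptions.
async function streamAssistantReply(
  conversationId: string,
  prompt: string,
  onToken: (text: string) => void,
): Promise<void> {
  const response = await fetch(`/api/conversations/${conversationId}/messages`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!response.ok || !response.body) {
    throw new Error(`Request failed: ${response.status}`);
  }

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // Each chunk may contain one or more tokens; appending it to the
    // in-progress assistant message lets the UI re-render incrementally.
    onToken(decoder.decode(value, { stream: true }));
  }
}
```

Server-Sent Events or WebSockets are common alternatives; the same "append tokens to an in-progress message" idea applies either way.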
Non-functional requirements
- Responsive, accessible UI that handles long conversations.
- Resilient to network failures (reconnect, retry, prevent duplicate sends); see the retry sketch after this list.
- Reasonable performance for large histories (hundreds to thousands of messages).
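A common way to prevent duplicate sends on retry is to attach a client-generated message ID that the server treats as an idempotency key. This is a sketch under that assumption; the endpoint and the "Idempotency-Key" header are illustrative, not prescribed by the prompt:

```typescript
// Sketch: retry a send safely by reusing the same client-generated ID,
// so the server can deduplicate if an earlier attempt actually succeeded.
async function sendWithRetry(
  conversationId: string,
  prompt: string,
  maxAttempts = 3,
): Promise<Response> {
  const clientMessageId = crypto.randomUUID(); // stable across retries
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fetch(`/api/conversations/${conversationId}/messages`, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "Idempotency-Key": clientMessageId,
        },
        body: JSON.stringify({ clientMessageId, prompt }),
      });
    } catch (error) {
      lastError = error;
      // Simple linear backoff before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, 500 * attempt));
    }
  }
  throw lastError;
}
```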
What to produce
- A high-level architecture covering the front-end, API layer, and model/inference layer.
- A proposed React component breakdown and the key client-side data structures/state (an illustrative sketch follows this list).
- The data flow from UI → server → AI model → streamed tokens → UI.
- How you would implement the streaming/typing effect.
- A persistence approach and its edge cases (refresh, multi-tab, conflicts, offline); a local-caching sketch also follows this list.
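For the component breakdown and client-side state, something along these lines is a reasonable starting point. All names here are illustrative assumptions, not part of the prompt:

```typescript
// Illustrative component tree:
// <App>
//   <ConversationSidebar>  // list of past conversations, "New chat" button
//   <ChatWindow>
//     <MessageList>        // virtualized for long histories
//       <MessageItem>      // one user or assistant message
//     <Composer>           // textarea + send button, disabled while streaming

// Core client-side data structures:
type Role = "user" | "assistant";

interface Message {
  id: string;          // client-generated; doubles as an idempotency key
  role: Role;
  content: string;     // grows token by token while streaming
  status: "pending" | "streaming" | "complete" | "error";
  createdAt: number;
}

interface Conversation {
  id: string;
  title: string;
  messages: Message[];
  updatedAt: number;
}

// Top-level UI state, normalized by conversation ID for cheap lookups.
interface ChatState {
  conversations: Record<string, Conversation>;
  activeConversationId: string | null;
}
```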
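For persistence, one common pattern is to keep a local cache for fast loads and offline reads while treating a server copy (if any) as the source of truth. A sketch under that assumption; the storage key and snapshot shape are hypothetical:

```typescript
// Sketch: cache the conversation list locally so a refresh shows history
// instantly; the server-backed copy remains the source of truth.
interface ConversationSnapshot {
  id: string;
  title: string;
  updatedAt: number;
}

const STORAGE_KEY = "chat-app:conversations"; // illustrative key

function saveToLocalCache(conversations: ConversationSnapshot[]): void {
  try {
    localStorage.setItem(STORAGE_KEY, JSON.stringify(conversations));
  } catch {
    // Quota exceeded or storage unavailable; keep working in memory only.
  }
}

function loadFromLocalCache(): ConversationSnapshot[] {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as ConversationSnapshot[]) : [];
}

// Multi-tab edge case: pick up writes made from other tabs.
window.addEventListener("storage", (event) => {
  if (event.key === STORAGE_KEY && event.newValue) {
    // Merge into in-memory state here; the conflict policy (last-write-wins
    // vs. server reconciliation) is one of the tradeoffs to discuss.
  }
});
```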