Scenario
You are given a small codebase for a web chat UI that calls an LLM API (similar to a ChatGPT-style interface). You can run it locally. During a 60-minute interview, you must debug issues and implement small features.
Assume a typical setup:

- Frontend chat UI (e.g., React/Vue) with a client-side store/state management layer.
- A backend (or direct client call) that invokes an LLM API.
- Responses may be streamed token-by-token.
Tasks
Bug 1: API key invalid
When starting the app and sending a message, the browser console (or network response) shows an error like `api key invalid`.
- Find the root cause.
- Fix the configuration so requests authenticate correctly.
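A common culprit is a missing, misnamed, or whitespace-padded key in the environment config. A minimal sketch of a defensive config check, assuming a hypothetical variable name `OPENAI_API_KEY` (real projects may use a `VITE_` or `REACT_APP_` prefix depending on the bundler):

```typescript
// Hypothetical config guard; the env var name OPENAI_API_KEY is an assumption.
// A stray newline or space copied into .env also yields "api key invalid"
// server-side, so normalize with trim() before use.
function getApiKey(env: Record<string, string | undefined>): string {
  const key = env["OPENAI_API_KEY"];
  if (!key || key.trim() === "") {
    throw new Error("Missing or empty API key: check your .env / config");
  }
  return key.trim();
}
```

Failing fast with a descriptive error here is usually friendlier than letting the server return a generic 401.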
Bug 2: Message renders as empty
After submitting a message, the UI receives a response but renders empty content.
- Inspect the response payload.
- Fix the parsing/rendering logic so the assistant text displays correctly.
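Empty rendering often means the code reads a field that does not exist in the actual payload (e.g., expecting `data.text` when the API nests the text deeper). A sketch of a corrected extractor, assuming an OpenAI-style chat-completion shape; the real payload in the codebase may differ:

```typescript
// Assumed payload shape (OpenAI chat-completion style); verify against the
// actual network response before relying on it.
interface ChatResponse {
  choices: { message: { role: string; content: string } }[];
}

// Read the nested field the API actually returns; a buggy version reading
// e.g. `data.text` would get undefined and render an empty bubble.
function extractAssistantText(data: ChatResponse): string {
  return data.choices?.[0]?.message?.content ?? "";
}
```

Diffing the assumed shape against the real one in the network tab is the fastest way to confirm the mismatch.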
Feature 1: Add a Clear button
Add a button that clears:

- the visible message history
- any stored conversation context
- and resets the chat state to an initial empty state
You are told the store already has an action/helper that can reset/initialize state; use it rather than re-implementing it.
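A minimal sketch of the Clear handler, using a hypothetical store whose names (`ChatStore`, `reset`) stand in for whatever the codebase actually provides:

```typescript
// Hypothetical minimal store; in the real codebase, call the existing
// reset/init action rather than writing your own.
interface Message { role: "user" | "assistant"; content: string }

interface ChatState {
  messages: Message[]; // visible history
  context: Message[];  // conversation context sent to the LLM
}

const initialState = (): ChatState => ({ messages: [], context: [] });

class ChatStore {
  state: ChatState = initialState();
  // Stands in for the store's existing reset helper (name assumed).
  reset(): void {
    this.state = initialState();
  }
}

// Clear-button handler: delegate to the store's reset action instead of
// clearing individual fields by hand, so future state fields are covered too.
function onClearClick(store: ChatStore): void {
  store.reset();
}
```

Delegating to the existing reset keeps the handler correct even if new pieces of state are added later.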
Feature 2: Add a Stop button for streaming
Add a button (like the “stop generating” square icon) that, while an assistant response is streaming, immediately cancels the in-flight stream and leaves the partially generated text in place (or another UX of your choosing, as long as it is consistent).
Consider:

- state management changes (e.g., `isStreaming`, `currentRequestId`)
- request cancellation (e.g., `AbortController`)
- cleanup so future messages still work
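The points above can be sketched as a small controller object; `isStreaming` and the class name are assumptions, but `AbortController` is the standard browser/Node cancellation primitive:

```typescript
// Sketch of stream lifecycle state; integrate with the real store as needed.
class StreamController {
  private controller: AbortController | null = null;
  isStreaming = false;

  // Call before starting a request; pass the returned signal to fetch().
  start(): AbortSignal {
    this.controller = new AbortController();
    this.isStreaming = true;
    return this.controller.signal;
  }

  // Stop-button handler: abort the in-flight request, then clean up so
  // the next message gets a fresh AbortController.
  stop(): void {
    this.controller?.abort();
    this.cleanup();
  }

  // Also call this when a stream finishes normally.
  cleanup(): void {
    this.controller = null;
    this.isStreaming = false;
  }
}
```

Note that aborting a `fetch` rejects its promise with an `AbortError`, so the streaming loop should catch that case and keep the partial text rather than surfacing it as a failure.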
What we are evaluating
- Your debugging workflow (repro, isolate, verify)
- Correctness of fixes
- Practical state management and cancellation handling
- Ability to reason about API contract mismatches (payload shape vs. assumptions)