How do you tackle unfamiliar problems?
Company: Oracle
Role: Software Engineer
Category: Behavioral & Leadership
Difficulty: medium
Interview Round: Technical Screen
## Behavioral questions
1. **Project deep dive**: Briefly introduce yourself and walk through 1–2 past projects you worked on. Pick one problem you solved that you consider especially impactful or technically challenging.
- What was the goal and scope?
- What was your role and what did you personally deliver?
- What trade-offs did you make (time, quality, complexity, cost)?
- What was the measurable outcome?
2. **Ambiguous/unfamiliar problem solving**: Describe how you would approach a problem that you are **not familiar with** and where you **don’t have complete information**.
- How do you get oriented?
- How do you gather missing requirements and validate assumptions?
- How do you reduce risk and make progress without blocking?
- How do you communicate status and decisions to stakeholders?
Quick Answer: This question evaluates problem-solving, ambiguity management, trade-off analysis, stakeholder communication, and leadership competencies in a software engineering context.
## Solution
## What a strong answer should cover
### 1) Project deep dive (structure + evidence)
Use a concise STAR/CARE format to avoid rambling.
- **S/T (Situation/Task):** What was the business/user problem? What constraints mattered (latency, cost, reliability, compliance, timeline)?
- **A (Actions):** What *you* did. Emphasize decisions, ownership, and technical depth (design, implementation, debugging, rollout).
- **R (Results):** Quantify outcomes when possible: performance improvements, cost savings, availability, adoption, incident reduction.
- **Learnings:** What you would do differently, what trade-off you revisited, and what you learned.
**Good signals:**
- Clear ownership boundaries (“I led X; partnered with Y”).
- Metrics before/after (e.g., p95 latency 600ms → 220ms, infra cost -18%).
- Mentions of risk management (feature flags, canary, rollback plan).
- Awareness of stakeholders (PM, SRE, security) and communication.
**Common pitfalls:**
- Only describing the team’s work (“we did…”) with no personal contribution.
- No success criteria or measurable outcome.
- Too much implementation detail without stating the problem and impact.
---
### 2) Unfamiliar problem with incomplete information (a repeatable playbook)
A strong approach is: **clarify → frame → de-risk → iterate → communicate**.
#### Step A: Clarify the objective and constraints
Ask targeted questions to turn ambiguity into concrete requirements:
- **Goal:** What does “success” mean? Who is the user/customer?
- **Scope:** What is in/out of scope for this iteration?
- **Constraints:** Deadline, budget, tech stack, compliance, operational requirements.
- **Quality bar:** Latency/throughput/availability targets, correctness expectations.
If stakeholders aren’t available, state assumptions explicitly and confirm later.
#### Step B: Frame the problem and identify unknowns
- Break into components and write down what you know vs. what you don’t.
- Identify the **highest-risk unknowns** (technical feasibility, dependency readiness, data quality, integration complexity).
A useful technique is a quick **risk matrix**: impact × likelihood.
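The impact × likelihood ranking can be sketched in a few lines. This is a hypothetical illustration: the unknowns and their 1–3 scores are made up, and in practice the scoring would come from a quick team discussion.

```python
# Illustrative risk matrix: score each unknown by impact x likelihood
# (both on a 1-3 scale here) and investigate the highest scores first.
unknowns = [
    {"name": "third-party API rate limits", "impact": 3, "likelihood": 2},
    {"name": "data quality in legacy table", "impact": 3, "likelihood": 3},
    {"name": "library licence compatibility", "impact": 2, "likelihood": 1},
]

# Higher score = riskier = investigate sooner.
ranked = sorted(unknowns, key=lambda u: u["impact"] * u["likelihood"], reverse=True)

for u in ranked:
    print(f'{u["impact"] * u["likelihood"]:>2}  {u["name"]}')
```

The point is not the code but the habit: writing risks down and multiplying two rough numbers forces an explicit ordering instead of chasing whichever unknown feels loudest.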
#### Step C: Propose hypotheses and minimal experiments
Make progress without full information by running low-cost tests:
- Build a **spike/prototype** to validate feasibility.
- Pull small samples of data to validate distributions and edge cases.
- Benchmark key operations (e.g., expected QPS, p95 latency targets).
Keep the experiment time-boxed (e.g., “2 days to answer: can we meet p95 < 300ms?”).
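A time-boxed feasibility check like the p95 question above can be a tiny micro-benchmark. This is a sketch under assumptions: `operation` is a stand-in for whatever call you are validating (a query, an RPC), and the 300 ms target and 50-sample count are placeholders.

```python
import statistics
import time

P95_TARGET_MS = 300  # placeholder quality bar from Step A

def operation():
    # Stand-in for the real call under test (query, RPC, parse, ...).
    time.sleep(0.005)

samples_ms = []
for _ in range(50):
    start = time.perf_counter()
    operation()
    samples_ms.append((time.perf_counter() - start) * 1000)

# statistics.quantiles with n=100 returns 99 cut points; index 94 is p95.
p95 = statistics.quantiles(samples_ms, n=100)[94]
print(f"p95 = {p95:.1f} ms -> {'meets' if p95 < P95_TARGET_MS else 'misses'} target")
```

A throwaway script like this answers the time-boxed question with data instead of opinion, and the number it prints goes straight into the status update in Step E.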
#### Step D: Deliver an MVP with incremental refinement
- Start with a minimal version that meets core requirements.
- Add guardrails: input validation, timeouts, retries, rate limits, and monitoring.
- Plan iterations: correctness first, then performance, then cost.
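The "guardrails" bullet above can be made concrete with a retry wrapper. This is a minimal sketch, not a production pattern: `call_dependency`, `TransientError`, and the retry/backoff constants are all illustrative names, and the stubbed failure exists only so the example is self-contained.

```python
import random
import time

MAX_RETRIES = 3
BASE_BACKOFF_S = 0.01  # illustrative; real values depend on the dependency

class TransientError(Exception):
    """Placeholder for a retryable failure (timeout, 503, ...)."""

def call_dependency(attempt):
    # Stub: fail on early attempts, succeed on the last, to exercise retries.
    if attempt < MAX_RETRIES - 1:
        raise TransientError("temporary failure")
    return "ok"

def call_with_retries():
    for attempt in range(MAX_RETRIES):
        try:
            return call_dependency(attempt)
        except TransientError:
            if attempt == MAX_RETRIES - 1:
                raise  # budget exhausted; surface the error to the caller
            # Exponential backoff with jitter before the next attempt.
            time.sleep(BASE_BACKOFF_S * (2 ** attempt) * random.uniform(0.5, 1.5))

result = call_with_retries()
print(result)
```

Bounding the retries and backing off (rather than retrying in a tight loop) is what keeps this a guardrail instead of a new way to overload a struggling dependency; the same framing applies to timeouts and rate limits.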
#### Step E: Communicate clearly and early
- Provide a short status update: **current understanding, assumptions, risks, next steps, ETA**.
- When blocked, offer options with trade-offs (e.g., “Option A faster but higher cost; Option B slower but cheaper”).
#### Step F: Post-solution learning
- Document what you learned and update runbooks/design docs.
- Add tests/alerts to prevent recurrence.
---
## Example answer skeleton (you can adapt)
> “When I face an unfamiliar problem with limited info, I first clarify the success metric and constraints with the stakeholder. Then I list unknowns and prioritize the riskiest ones. I time-box a prototype or data investigation to validate feasibility and key assumptions. I deliver an MVP behind a feature flag with monitoring and a rollback plan, and I communicate assumptions/risks in writing so everyone aligns. After shipping, I document learnings and add tests/alerts.”
This hits: requirements, risk management, iterative delivery, and communication—exactly what interviewers look for.