##### Question
- Describe a time you worked on a project without any guidance. How did you proceed?
- Tell me about a time you received meaningful feedback. What was it, and how did you act on it?
- How do you respond when your ideas face strong pushback from others?
- Give an example of when you and your manager disagreed on project priorities. How did you resolve it?
Quick Answer: These questions evaluate a software engineer's ownership, resilience, communication, receptiveness to feedback, influence without authority, and prioritization and conflict-resolution skills.
##### Solution
# How to Approach These Questions
Use STAR consistently:
- Situation: Brief context (team, product, deadline, constraints).
- Task: Your objective and success criteria.
- Action: What you did—decisions, trade-offs, communication, tools.
- Result: Quantified impact, lessons, and follow-ups.
Prep checklist:
- Pick 4 distinct stories from different contexts (feature delivery, production incident, cross-team project, feedback growth).
- Attach metrics: latency, error rate, deployment frequency, review time, user adoption, OKR movement.
- Practice a 2–3 minute delivery; lead with the headline outcome.
Pitfalls to avoid:
- Vague results (“it went well”) or crediting only the team without naming your own contribution.
- Dwelling on blame; focus on systems and lessons learned instead.
- Long setup; spend most of your time on Action and Result.
---
1) Working With Little or No Guidance
What the interviewer wants:
- Ownership, bias to action, ability to reduce ambiguity, risk management.
Structure your answer:
- Define the problem yourself (requirements, constraints, stakeholders).
- Create a plan (design doc/RFC, milestones, success metrics).
- De-risk with spikes/prototypes; get periodic check-ins.
- Ship, measure, iterate.
Sample STAR answer:
- Situation: Our CI pipeline suffered frequent flaky test failures that slowed development. No one owned the pipeline, and leadership asked for improvement but provided no plan.
- Task: Within a quarter, reduce CI flakiness and improve build reliability, targeting ≤2% flaky rate and <30 min avg build time.
- Action: I interviewed 6 frequent CI users, identified four root causes (test-order dependency, network mocks, resource contention, and timeouts), and wrote a short RFC with phased milestones. I built a small dashboard that flagged flaky tests from historical runs (one detection approach is sketched after this answer), added deterministic test ordering, containerized the mocks, and parallelized build steps. I scheduled biweekly 15-minute stakeholder reviews.
- Result: Flaky test rate dropped from ~11% to 1.8% in 8 weeks; average build time decreased 22% (38 → 29.5 min). Dev throughput (PRs merged per week) rose 15%. I handed off ownership with runbooks and alerts.
Why this works: It shows self-directed scoping, lightweight alignment mechanisms, incremental delivery, and measured impact.
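If the interviewer probes how the flaky-test dashboard worked, it helps to have a concrete mechanism in mind. Below is a minimal Python sketch of one plausible approach: flag tests that both pass and fail on the same commit across retries. The `runs` data shape and the 2% threshold are assumptions for illustration, not details of any specific CI system.

```python
from collections import defaultdict

# Each record is (test_name, commit_sha, passed). In practice these rows
# would come from your CI provider's API; the shape here is an assumption.
runs = [
    ("test_login", "abc123", True),
    ("test_login", "abc123", False),  # pass and fail on the same commit
    ("test_search", "abc123", True),
    ("test_search", "def456", True),
]

def flaky_rates(runs, threshold=0.02):
    """Return tests whose share of mixed-result commits exceeds threshold."""
    outcomes = defaultdict(set)  # (test, commit) -> set of pass/fail results
    for test, sha, passed in runs:
        outcomes[(test, sha)].add(passed)

    commits_seen = defaultdict(int)
    mixed = defaultdict(int)
    for (test, _sha), results in outcomes.items():
        commits_seen[test] += 1
        if len(results) > 1:  # saw both True and False => flaky on that commit
            mixed[test] += 1

    return {
        test: mixed[test] / commits_seen[test]
        for test in commits_seen
        if mixed[test] / commits_seen[test] > threshold
    }

print(flaky_rates(runs))  # {'test_login': 1.0}
```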
---
2) Receiving Meaningful Feedback and Acting on It
What the interviewer wants:
- Coachability, growth mindset, and observable behavior change.
Structure your answer:
- Share specific, actionable feedback you received.
- Explain what you changed (process, habits, tools) and how you measured improvement.
- Note how you solicited further feedback and scaled learning to others.
Sample STAR answer:
- Situation: In quarterly feedback, peers noted my PRs were large and hard to review, delaying merges.
- Task: Improve reviewability and team velocity without sacrificing quality.
- Action: I adopted an RFC-first approach for non-trivial changes, split work into incremental changes behind feature flags, and targeted PRs under 300 lines (a simple diff-size check is sketched after this answer). I added clear test plans, screenshots for UI changes, and tagged reviewers by area. I also set a personal SLA of responding to review comments within 24 hours.
- Result: Median review time fell 35% (from 17 hours to 11). Merge reverts dropped from 3 per quarter to 0 for the next two quarters. Two teammates adopted the RFC + small-PR pattern; our squad’s deployment frequency increased from 3 to 5 per week.
Why this works: It shows you transformed feedback into measurable team-level improvements and institutionalized the change.
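One lightweight way to make the small-PR habit stick is a CI or pre-push check on diff size. The sketch below sums changed lines with `git diff --numstat`; the 300-line limit and the `origin/main` base branch are illustrative assumptions, and teams often exempt generated files.

```python
import subprocess
import sys

MAX_LINES = 300  # the target from the story above; tune per team

def changed_lines(base="origin/main"):
    """Sum added + deleted lines versus the base branch via `git diff --numstat`."""
    out = subprocess.run(
        ["git", "diff", "--numstat", base],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for row in out.splitlines():
        added, deleted, _path = row.split("\t", 2)
        if added != "-":  # binary files report "-" for both counts
            total += int(added) + int(deleted)
    return total

if __name__ == "__main__":
    n = changed_lines()
    if n > MAX_LINES:
        sys.exit(f"{n} changed lines; consider splitting (target <= {MAX_LINES}).")
    print(f"OK: {n} changed lines.")
```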
---
3) Handling Strong Pushback on Your Ideas
What the interviewer wants:
- Influence without authority, data-driven reasoning, collaboration, and respect for constraints.
Structure your answer:
- Clarify the concern behind the pushback (risk, scope, timeline, complexity).
- Seek common goals, propose a small experiment/spike, and define evaluation metrics.
- Document decisions (ADR/RFC) and agree on revisit criteria.
Sample STAR answer:
- Situation: I proposed migrating an internal service-to-service API to gRPC to cut latency. Several senior engineers pushed back, citing migration risk and maintenance burden.
- Task: Build alignment or find a lower-risk alternative to improve latency for the critical path.
- Action: I wrote an ADR comparing REST and gRPC across latency, payload size, tooling, and rollout risk. I then ran a one-week spike: a shadow gRPC endpoint with a dual-write client, measuring P50/P95 latency and error rates under load (a measurement sketch follows this answer). I proposed a staged rollout (10% → 50% → 100%) with fast rollback.
- Result: The spike showed 18% P95 latency improvement and no error regression. We agreed on a limited-scope rollout for the hot path only. After rollout, end-to-end P95 latency dropped 12%, improving page load time and reducing timeouts by 9%. We kept REST for non-critical paths, minimizing migration risk.
Why this works: It balances conviction with humility, uses data and experiments, and finds a pragmatic compromise.
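If asked how the spike quantified the shadow endpoint, a simple percentile comparison over collected latency samples is one plausible shape. The sketch below uses the standard-library `statistics.quantiles`; the sample data is invented for illustration, not taken from the story.

```python
import statistics

def pctl(samples, pct):
    """pct-th percentile via statistics.quantiles (inclusive method)."""
    return statistics.quantiles(samples, n=100, method="inclusive")[pct - 1]

# Illustrative latencies in milliseconds; real numbers would come from
# load-test or shadow-traffic logs.
rest_ms = [42, 38, 51, 47, 120, 44, 39, 300, 41, 45]
grpc_ms = [35, 31, 40, 38, 95, 36, 30, 240, 33, 37]

for name, sample in [("REST baseline", rest_ms), ("gRPC shadow", grpc_ms)]:
    print(f"{name}: P50={pctl(sample, 50):.0f} ms, P95={pctl(sample, 95):.0f} ms")
```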
---
4) Disagreeing With Your Manager on Priorities
What the interviewer wants:
- Managing up, aligning to goals/OKRs, and making trade-offs explicit.
Structure your answer:
- Translate your viewpoint into business/user impact and risk terms.
- Offer options with costs/benefits, propose time-bounded experiments, and align on decision criteria.
Sample STAR answer:
- Situation: My manager prioritized a new feature for a launch; I believed we needed to address reliability issues causing weekly incidents.
- Task: Align on a plan that met launch goals without compounding risk.
- Action: I compiled incident data (4 Sev-2s in 6 weeks; MTTR ~70 minutes) and estimated the opportunity cost in developer hours lost and user churn during incidents. I proposed a split plan: reserve 20% capacity for reliability (“error budget” work, sketched after this answer) for 3 sprints, with clear exit criteria (cut the error rate from 0.9% to under 0.3% and MTTR to under 30 minutes), while delivering MVP scope behind flags. We agreed to de-scope two lower-impact feature items to protect the reliability buffer.
- Result: Error rate dropped to 0.28%, MTTR to 26 minutes, and we still hit the launch date. Post-launch support load fell 40%, and feature adoption reached 32% of active users in two weeks. We later formalized a monthly error-budget review.
Why this works: It shows principled prioritization, risk framing, and a collaborative, metrics-driven compromise.
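The “error budget” framing can be made concrete with a small calculation: given a target error rate, express observed errors as a share of the budget and gate feature work on the burn rate. A minimal sketch, using the story's 0.3% target and illustrative request counts:

```python
# Assumes the 99.7% reliability target implied by the story's 0.3% goal.
TARGET_ERROR_RATE = 0.003

def error_budget_report(requests, errors):
    rate = errors / requests
    budget = TARGET_ERROR_RATE * requests  # errors the target lets you "spend"
    burn = errors / budget                 # 1.0 means the budget is exhausted
    status = "pause feature work, invest in reliability" if burn >= 1.0 else "on track"
    return f"error rate {rate:.2%}, budget burn {burn:.0%} -> {status}"

# Numbers are illustrative only.
print(error_budget_report(requests=1_000_000, errors=2_800))
# error rate 0.28%, budget burn 93% -> on track
```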
---
Validation and Guardrails
- Timebox your answers and lead with the Result: “We cut the flaky-test rate from 11% to 1.8% by…”
- Use numbers even if approximate; tie them to goals/OKRs.
- Avoid confidential or sensitive details; abstract service names if needed.
- If you lack direct examples, adapt from internships, open-source, or academic team projects, but keep engineering specifics (design docs, code reviews, testing strategy).
Quick rehearsal template (fill-in):
- Situation: [Team/product], [problem], [constraints].
- Task: [Goal], [success metric/OKR].
- Action: [Top 3–4 actions you personally took], [trade-offs/experiments].
- Result: [Quantified impact], [what changed for users/business], [lesson you applied later].