Quantify the impact of your projects using STAR
Company: Scale AI
Role: Software Engineer
Category: Behavioral & Leadership
Difficulty: medium
Interview Round: Technical Screen
Interview-style behavioral prompt:
"Pick one or two of your impactful projects and walk me through them in detail. Focus on what *you* did and how you measured success."
Structure your answer using the **STAR** method, with particular emphasis on **quantifying results**:
- **Situation**: Briefly describe the context, scale, and importance of the project.
- **Task**: Your specific responsibilities and goals.
- **Action**: The concrete technical and non-technical steps you took.
- **Result**: Quantify impact where possible, such as:
  - Performance gains (latency, throughput improvements).
  - Reliability gains (reduced incident count, MTTR, error rate).
  - Productivity gains (build times, deployment frequency, fewer manual steps).
  - Business metrics (revenue, conversion rate, retention, cost savings).
Prepare at least one example where you can give specific numbers (even approximate), and be ready to explain how you obtained or estimated those metrics.
**Quick Answer:** This question tests whether a candidate can use the STAR method to articulate ownership of a project, quantify its technical and business impact, and demonstrate the leadership, communication, and metrics literacy expected of a Software Engineer.
## Solution
This type of question probes whether you understand your **impact**, not just your activities. A strong answer emphasizes measurable outcomes and your personal contribution.
---
### 1. Choose the right project(s)
Pick projects that are:
- **Meaningful**: Visible to the business or users (not purely cosmetic refactors).
- **Quantifiable**: You can connect them to metrics.
- **Role-appropriate**: For senior roles, prefer cross-team or system-level impact.
Examples:
- Latency or throughput improvements for a critical service.
- Reliability improvements (reducing incidents or downtime).
- Developer productivity improvements (CI/CD, tooling).
- Features that significantly moved a business KPI.
---
### 2. Situation & Task: set context and stakes
Be concise but specific.
Example:
> "Our checkout service was experiencing high latency spikes during sales, leading to cart abandonment. At peak, p95 latency would exceed 3 seconds about 20% of the time. I was tasked with improving performance ahead of our Black Friday sale, with a target of keeping p95 under 1 second at 2x our normal peak traffic."
This sets:
- Problem: slow checkout.
- Baseline metric: p95 spiking above 3s about 20% of the time at peak.
- Target: p95<1s at 2x traffic.
- Your role: responsible for the improvement.
---
### 3. Action: highlight your specific contributions
Break actions into 3–5 concrete bullets. Focus on what **you** did, not just what the team did.
Example:
> "I profiled the service using APM and found that 40% of time was spent in synchronous calls to the pricing service. I proposed and implemented a local caching layer for stable pricing data, and reworked our database access pattern from N+1 queries per cart to a single batched query. I added performance metrics and dashboards for end-to-end latency and cache hit rate. Finally, I designed a canary rollout and load test plan to validate improvements under realistic load."
Make sure each bullet is:
- Action-oriented ("profiled", "designed", "implemented", "coordinated").
- Attributable to you.
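If the interviewer digs into the technical details, it also helps to be able to sketch the change itself. Below is a minimal, hypothetical sketch of the two optimizations from the example above; `pricing_client` and `db` are illustrative stand-ins, not APIs from the actual story.

```python
import time

# Hypothetical in-process TTL cache for stable pricing data.
# pricing_client.get_prices(skus) is an assumed batch API.
_CACHE = {}  # sku -> (price, expires_at)
TTL_SECONDS = 300

def get_prices(pricing_client, skus):
    """Serve prices from the local cache, batching all misses into one call."""
    now = time.monotonic()
    prices, misses = {}, []
    for sku in skus:
        entry = _CACHE.get(sku)
        if entry and entry[1] > now:
            prices[sku] = entry[0]
        else:
            misses.append(sku)
    if misses:
        # One batched round trip for all cache misses instead of one per SKU.
        for sku, price in pricing_client.get_prices(misses).items():
            _CACHE[sku] = (price, now + TTL_SECONDS)
            prices[sku] = price
    return prices

def load_cart_items(db, cart_id):
    """Replace N+1 per-item lookups with a single batched JOIN query."""
    return db.execute(
        """
        SELECT ci.sku, ci.quantity, p.name
        FROM cart_items ci
        JOIN products p ON p.sku = ci.sku
        WHERE ci.cart_id = %s
        """,
        (cart_id,),
    ).fetchall()
```

Even a sketch at this level signals that you understand *why* the change helped: fewer network round trips to the pricing service and fewer database queries per checkout.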
---
### 4. Result: quantify impact with numbers
This is the core of "STAR with quantification". Use before/after or rate-of-change metrics:
- Latency: p50/p90/p95/p99 differences.
- Error rates: from X% to Y%.
- Traffic: from N QPS to M QPS supported.
- Cost: infrastructure savings.
- Developer metrics: build time, deploy time, manual steps.
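If you only have raw latency samples (from logs or a load test) rather than dashboards, percentiles are simple to recompute yourself. A minimal nearest-rank sketch, using made-up sample data:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile of raw latency samples, for p in (0, 100]."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Illustrative latency samples in seconds, not real measurements.
before = [0.4, 0.6, 3.1, 0.5, 2.8, 0.7, 3.4, 0.9]
after  = [0.3, 0.4, 0.7, 0.5, 0.6, 0.4, 0.9, 0.5]

for p in (50, 95, 99):
    print(f"p{p}: {percentile(before, p):.2f}s -> {percentile(after, p):.2f}s")
```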
Continuing the example:
> "After rollout, our p95 latency under normal peak load dropped from ~3s to 700ms, and p99 from 6s to 1.4s. In load tests at 2.5x previous peak traffic, p95 stayed under 900ms. Cache hit rate stabilized around 85%, and DB queries per checkout dropped by ~60%. During Black Friday, we processed 2.2x more checkouts than the previous year with zero performance-related incidents. The business later reported a ~3% increase in conversion during that period, which they partially attributed to the improved checkout experience."
If you lack exact numbers, you can:
- Use **ranges** ("around 20–30% speedup").
- Use **relative terms** ("roughly cut in half").
- Either way, make clear whether a number is measured or estimated.
---
### 5. Show how you measured or estimated impact
Interviewers want to see you think in terms of **measurement**.
Example:
> "We measured latency using Prometheus metrics and Grafana dashboards, comparing the week before and after the change under similar traffic patterns. For Black Friday, we used our load testing environment to simulate 2–3x traffic, then validated those numbers in production with real traffic. The conversion uplift came from our analytics team, and I cross-checked that the test period lined up with our deployment."
This shows:
- You care about experimental design.
- You don’t just throw around numbers without understanding where they came from.
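To make that concrete, here is a hedged sketch of pulling before/after p95 values from the Prometheus HTTP API, assuming the service exports a standard latency histogram. The metric name, `service` label, server URL, and timestamps are all assumptions for illustration.

```python
import requests

PROM_URL = "http://prometheus.internal:9090"  # assumed server address

# histogram_quantile over a standard Prometheus histogram; the metric name
# and the service label are hypothetical.
P95_QUERY = (
    "histogram_quantile(0.95, sum(rate("
    "http_request_duration_seconds_bucket{service='checkout'}[5m])) by (le))"
)

def p95_at(timestamp):
    """Instant-query the p95 latency at an RFC 3339 timestamp."""
    resp = requests.get(
        f"{PROM_URL}/api/v1/query",
        params={"query": P95_QUERY, "time": timestamp},
        timeout=10,
    )
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else float("nan")

# Compare the same weekday and hour before and after the rollout so traffic
# patterns are comparable (illustrative timestamps).
print("before:", p95_at("2024-11-18T20:00:00Z"))
print("after: ", p95_at("2024-11-25T20:00:00Z"))
```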
---
### 6. Optional: tie to broader impact or learning
You can close with:
- How the work influenced other teams.
- How you reused the pattern.
- What you learned.
Example:
> "The caching pattern we introduced was later adopted by two other high-traffic services, and I documented it as a recommended practice in our internal wiki. I also learned the importance of building observability early; on my next project I added latency and error metrics before making any optimization changes."
---
### 7. Checklist for your answer
Before using your story in an interview, check:
- [ ] Can I state the initial problem with at least one number?
- [ ] Can I describe my actions in 3–5 clear, attributable bullets?
- [ ] Can I state the results with at least one **quantified** improvement?
- [ ] Do I know how those numbers were measured or estimated?
If yes, you have a strong STAR story that clearly demonstrates impact, which is particularly important for mid/senior-level roles.