
Learning from Wrong Data

Last updated: Mar 29, 2026

Quick Overview

This question evaluates a product manager's data literacy, decision-making under uncertainty, accountability, and leadership in a situation where a significant decision was based on incorrect or misleading data.


Company: Google

Role: Product Manager

Category: Behavioral & Leadership

Difficulty: Medium

Interview Round: Onsite

##### Question

Tell me about a time you made a significant decision based on incorrect or misleading data. What happened, how did you uncover the issue, what was the impact, and what safeguards did you implement afterward?


Solution

## How to Approach

- Use STAR structure: be specific, quantify, own the mistake, and highlight learnings and systemic improvements.
- Show strong product judgment: validate data, triangulate sources, apply guardrails, and favor experiments for causal claims.
- Keep it concise but concrete: 60–90 seconds of setup; spend the rest on detection, impact, and safeguards.

## Example Answer (PM Context)

Situation

- I owned monetization for a consumer product. Our weekly dashboard showed a 15% drop in ROAS (Return on Ad Spend) for a large cohort. ROAS = Revenue / Cost.
- The dashboard indicated ROAS fell from 2.0 to 1.7 (−15%).

Task

- Protect margin quickly. I decided to pause a set of long‑tail campaigns that looked least efficient.

Action

- I paused ~20% of those campaigns and reallocated budget toward better performers.
- Two days later, I noticed total daily revenue fell about 3% without the expected improvement in margin.
- I initiated a reconciliation: compared our warehouse metrics with source platform totals and finance billing data.

Discovery

- We found the ETL had been updated to join a new cost adjustment table. Some ad groups had multiple cost rows per day (late-adjusted costs). The join duplicated cost records, inflating cost and depressing ROAS.
- Quick check: in the affected cohort, platform-reported cost was $500k, but our warehouse showed ~$1M. With revenue stable at ~$1M, true ROAS ≈ 1.0M / 0.5M = 2.0, not the 1.7 we saw.

Impact

- We paused viable campaigns for ~5 days:
  - Short-term: ~3% revenue dip for the week; some ramp-up friction to restore spend.
  - Team time: engineering and analytics spent 2 days on root cause and backfill.
- I owned the decision, published a postmortem, and restored campaigns with a controlled ramp.

Safeguards Implemented

1. Data quality and reconciliation (see the code sketch at the end of this solution)
   - Primary-key and duplication checks on joins (assert one-to-one, or one-to-many with bounded multiplicity).
   - Daily reconciliation to platform control totals with a 1% tolerance; alert if exceeded.
   - Freshness and null-rate monitors on critical fields (campaign_id, date, cost, revenue).
2. Decision guardrails
   - For material spend changes: require either (a) triangulation from two independent sources or (b) an experiment/holdout.
   - Introduced a stop-loss: automatic rollback if revenue or margin deviates >2% intraday without a causal explanation.
   - Canary changes with a 10% traffic slice before global rollout.
3. Experimentation hygiene
   - When feasible, use randomized holdouts and guardrail metrics (e.g., conversion rate, refund rate) with sequential monitoring.
   - Stratified analysis by channel/geo to avoid Simpson's paradox.
4. Process and accountability
   - Data contract for cost tables (unique keys, allowed late-arrival behavior) and a code review checklist for metric changes.
   - Decision log capturing data sources used, assumptions, and pre-agreed reversal criteria.

Result

- Corrected ROAS returned to ~2.0; restored campaigns recovered revenue.
- The new checks later caught a separate undercount within hours, preventing another bad decision.

## Why This Works

- It shows ownership, fast detection, measured rollback, and systemic fixes.
- It demonstrates PM judgment: don't act on a single dashboard; reconcile, and run experiments for causal decisions.

## Small Numeric Illustration (Teaching Aid)

- Suppose revenue = $1,000,000 and true cost = $500,000 → ROAS = 2.0.
- A join duplicates cost rows to $1,000,000 → ROAS appears to be 1.0 (−50%).
- Even a smaller duplication (true cost $500k, reported $588k) yields ROAS ≈ 1.7 (≈ −15%), enough to trigger an overreaction.
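The distortion above is easy to reproduce. Here is a minimal sketch in Python with pandas — the table names, column names, and dollar figures are hypothetical, not from a real pipeline — showing how a fan-out join on a late-arriving cost-adjustment table double-counts base cost, and how pre-aggregating the adjustments plus pandas' merge validation prevents it.

```python
import pandas as pd

# Hypothetical per-ad-group daily cost, as reported by the ad platform ($500k total).
costs = pd.DataFrame({
    "ad_group_id": ["a", "b"],
    "date": ["2025-06-01", "2025-06-01"],
    "cost": [300_000.0, 200_000.0],
})

# Late-arriving cost adjustments: multiple rows per ad group per day.
adjustments = pd.DataFrame({
    "ad_group_id": ["a", "a", "b", "b"],
    "date": ["2025-06-01"] * 4,
    "cost_adjustment": [5_000.0, 2_000.0, 3_000.0, 1_000.0],
})

revenue = 1_000_000.0  # from a separate (correct) revenue pipeline

# Buggy ETL: a plain left join fans out each cost row once per adjustment row,
# double-counting base cost in any downstream sum.
buggy = costs.merge(adjustments, on=["ad_group_id", "date"], how="left")
buggy_cost = (buggy["cost"] + buggy["cost_adjustment"]).sum()
print(f"buggy ROAS: {revenue / buggy_cost:.2f}")   # ~0.99 — cost looks like ~$1M

# Fix: roll adjustments up to one row per key, then assert the join is one-to-one.
rolled = adjustments.groupby(["ad_group_id", "date"], as_index=False)["cost_adjustment"].sum()
fixed = costs.merge(rolled, on=["ad_group_id", "date"], how="left", validate="one_to_one")
fixed_cost = (fixed["cost"] + fixed["cost_adjustment"]).sum()
print(f"fixed ROAS: {revenue / fixed_cost:.2f}")   # ~1.96 — close to the true 2.0
```

The `validate` argument is the cheap, always-on form of the primary-key check described under Safeguards: it raises immediately if either side of the join has duplicate keys, instead of silently fanning out rows.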
## Pitfalls to Call Out

- Causation vs correlation: don't reallocate spend based on correlational drops without a holdout.
- Aggregates hide heterogeneity: do stratified cuts (channel, geo, device) to avoid Simpson's paradox.
- Data freshness and backfills: late-arriving data can temporarily mislead; label dashboards with freshness SLAs.

## Template You Can Reuse

- Situation/Task: We saw metric X drop/improve by Y% and made decision Z to protect/capture value.
- Action: Specific steps you took and why the data seemed credible at the time.
- Discovery: The exact flaw (instrumentation bug, duplication, attribution window, timezone, sampling bias) and how you detected it (reconciliation, A/A test, segment analysis).
- Impact: Quantify user, revenue, or time cost.
- Safeguards: Data quality checks, decision guardrails, experimentation standards, process changes, and a concrete example of those safeguards catching an issue later.
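To make the first safeguard ("Data quality and reconciliation") concrete, here is a minimal sketch — again assuming pandas, with hypothetical function names, table contents, and thresholds — of the duplicate-key check and the 1%-tolerance control-total reconciliation. In a real pipeline, checks like these would run after each ETL load, before any dashboard or decision consumes the data.

```python
import pandas as pd

def assert_unique_key(df: pd.DataFrame, keys: list[str]) -> None:
    """Fail fast if a join key is duplicated (the root cause in the story above)."""
    dupes = int(df.duplicated(subset=keys).sum())
    if dupes:
        raise ValueError(f"{dupes} duplicate rows for key {keys}")

def reconcile(warehouse_total: float, platform_total: float,
              tolerance: float = 0.01) -> None:
    """Alert if warehouse and platform control totals diverge beyond tolerance."""
    drift = abs(warehouse_total - platform_total) / platform_total
    if drift > tolerance:
        raise ValueError(f"cost drift {drift:.1%} exceeds {tolerance:.0%} tolerance")

# Usage with illustrative numbers: warehouse cost sums to $508k against a
# platform-reported $500k, a 1.6% drift, so the reconciliation raises an alert.
costs = pd.DataFrame({
    "ad_group_id": ["a", "b"],
    "date": ["2025-06-01", "2025-06-01"],
    "cost": [305_000.0, 203_000.0],
})
assert_unique_key(costs, ["ad_group_id", "date"])   # passes: one row per key
reconcile(warehouse_total=costs["cost"].sum(),
          platform_total=500_000.0)                 # raises: 1.6% > 1% tolerance
```

In production these assertions would feed an alerting system rather than raise exceptions, but the shape of the checks is the same: enforce key uniqueness at every join, and reconcile aggregates against an independent control total on a schedule.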


Behavioral: Decision-Making With Bad or Misleading Data

You made a significant decision that was based on incorrect or misleading data.

Address the following:

  1. What was the situation and the decision you made?
  2. How did you uncover that the data was wrong or misleading?
  3. What was the impact of the mistake (on users, metrics, timeline, revenue, etc.)?
  4. What safeguards did you implement afterward to prevent recurrence?

Tip: Use a structured story (e.g., STAR: Situation, Task, Action, Result). Quantify impact and be explicit about what you learned and the systemic fixes you put in place.

