Part A — A peer in Team B tells you they feel unwelcome because influential members of Team A openly exclude them from discussions that affect shared work. You have no formal authority over either team, and a launch is 3 weeks away.
1) Describe, step by step, how you would diagnose the situation in the first 48 hours (specific questions you ask, artifacts you review, signals you seek). 2) Lay out your stakeholder map and a concrete communication plan (who, when, channel, message goals). 3) Detail immediate risk mitigation to protect psychological safety and delivery timelines. 4) Define explicit decision criteria for when to escalate to HR or senior leadership. 5) Propose leading and lagging success metrics (quantitative and qualitative) to know the intervention is working, and how you would instrument them. 6) Identify two likely failure modes of your plan and how you would course-correct.
Part B — Describe a specific example (STAR) where you had ≤14 days to learn a new domain/tool to deliver a high-stakes result. What did you cut, how did you validate you were learning the right things, and how did you measure the impact? Include one mistake you made and what you would do differently.
Part C — Give a concrete example of a conflict with a senior stakeholder who disagreed with your analytic approach. How did you surface interests vs. positions, what trade-offs did you propose, and how did you quantify the impact of the final decision on business outcomes?
Quick Answer: This question evaluates interpersonal leadership, cross-team collaboration, conflict resolution, rapid domain learning, stakeholder mapping, and the ability to quantify analytic trade-offs and impact within a data science context.
Solution
# Part A — Cross-Team Inclusion and Delivery Under Time Pressure
Assumptions:
- You are a neutral cross-functional partner without formal authority (e.g., DS/TPM-like influence role) supporting a near-term launch.
- Goal: restore inclusive decision-making and protect delivery timelines without escalating prematurely.
## 1) First 48 hours: Diagnosis (questions, artifacts, signals)
0–6 hours: Listen, document, and scope
- 1:1 with the reporter (Team B peer)
- Questions:
- Can you walk me through the last 2–3 decisions where you were excluded (who, when, where, impact)?
- What was said/done that felt exclusionary (quotes/screenshots if available)?
- What’s the concrete impact on deliverables, dependencies, or risks?
- What does “success in 3 weeks” look like for you? What are you comfortable with me sharing and with whom?
- Artifacts to request: meeting invites, doc links/comments, Slack/Teams threads, decision logs, PRs/issues.
- Signals to note: repeated lack of invites, private channels deciding shared work, after-the-fact FYIs, dismissive comments, reassigning work without consent.
- Quickly inform your manager that you’re investigating a cross-team risk; agree on discretion and escalation boundaries.
6–24 hours: Triangulate neutrally
- 1:1 with Team A influencer(s) (PM/TL/EM)
- Questions:
- What are the top launch risks and critical path dependencies with Team B?
- How are decisions currently made (RACI/DRI)? Any confidentiality constraints?
- What’s been challenging in collaborating with Team B? What would good look like?
- Signals: role ambiguity, pressure to move fast, concerns about Team B’s quality or reliability, a “too many cooks” rationale.
- 1:1 with shared PM/TPM and both EMs (or closest equivalents)
- Questions:
- What’s the agreed RACI/DRI for decisions affecting shared work?
- What’s the launch checklist and go/no-go criteria? Where are we red/amber?
- Which decisions must be inclusive vs. can be local?
- Signals: missing RACI, undocumented decision forums, unclear ownership.
24–48 hours: Verify facts and patterns
- Review artifacts
- PRDs/tech/design docs and share settings, decision logs, roadmap/OKRs, issue tracker (Jira/etc.), code reviews, experiment plans, calendars (who is invited to what), Slack channels.
- Synthesize a short “current-state” summary
- What happened, when, who’s affected, impact on timeline/quality, clarity of roles, and immediate risks.
- Run a quick pattern check
- Is exclusion systematic (multiple decisions/people) vs. episodic? Any potential policy or code-of-conduct concerns?
## 2) Stakeholder map and communication plan
Stakeholder map (by role and interest/influence)
- High influence, high interest: Team A PM/TL/EM, shared PM/TPM, your manager.
- High influence, medium interest: Product leadership/Director(s) sponsoring launch.
- Medium influence, high interest: Team B DS (reporter), Team B EM/PM.
- Advisory: Legal/Policy/People Partner (only if needed), QA/Release manager.
Communication plan (who, when, channel, goal)
- Reporter (Team B DS):
- When: Day 0 and Day 2 check-in.
- Channel: 1:1.
- Goals: Hear concerns, confirm facts, align on what can be shared, agree on success criteria and boundaries.
- Team A influencers (PM/TL/EM):
- When: Day 1 1:1s; Day 2 joint working session.
- Channel: 1:1 then facilitated group meeting.
- Goals: Align on decision governance (RACI/DRI), inclusive forums, immediate process fixes that don’t slow the launch.
- Shared PM/TPM + both EMs:
- When: Day 2 joint working session; then 15-min daily triage until stable.
- Channel: 30–45 min live working session; shared doc; daily stand-up/triage.
- Goals: Confirm RACI, define decision forums, create a single shared plan and risk log, assign DRIs and SLAs.
- Broader teams:
- When: After agreement, post a short update.
- Channel: Shared Slack channel + doc.
- Goals: Publish clear RACI, inclusive decision forums, SLAs, escalation path, and commitments.
Message themes (concise, neutral, outcome-focused)
- We need a predictable and inclusive path to decisions that affect shared work.
- Propose: a shared channel, a decision log, explicit DRIs, and a short daily triage until launch.
- Commitments: add all relevant stakeholders to forums; summarize decisions in writing; 24-hour SLA for cross-team questions.
## 3) Immediate risk mitigation (psychological safety and timelines)
Psychological safety
- Establish a shared, open channel for project decisions; agree that decisions about shared work are not made in private channels.
- Create a decision log (date, DRI, context, options, decision, dissent).
- Set meeting norms: agenda circulated 24h prior; round-robin input; rotate facilitator; capture dissent explicitly.
- Offer 1:1 and anonymous input (quick form) for those uncomfortable speaking up.
Delivery timelines
- Daily 15-min cross-team triage on critical path, blockers, and risks (R/A/G status).
- Clarify DRIs and SLAs: code reviews, data requests, experiment approvals, doc reviews.
- Freeze scope creep: require change proposals to go through the decision log.
- Build fallback plans: feature flags, staged rollout, guardrail metrics and automatic rollback criteria.
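To make the rollback criteria concrete, here is a minimal sketch of pre-agreed guardrails with an automatic trigger; the metric names and thresholds are illustrative assumptions to negotiate with both teams, not real launch values.

```python
# Minimal sketch of automatic rollback criteria. All metric names and
# thresholds below are illustrative assumptions, not real launch values.
GUARDRAILS = {
    "error_rate": 0.02,          # roll back if error rate exceeds 2%
    "latency_p99_ms": 800,       # ...or p99 latency exceeds 800 ms
    "conversion_drop_pct": 1.0,  # ...or conversion drops more than 1 point
}

def should_rollback(metrics: dict, breach_counts: dict, patience: int = 2) -> bool:
    """Trigger rollback once any guardrail is breached on `patience`
    consecutive checks (breach_counts persists between calls)."""
    for name, threshold in GUARDRAILS.items():
        if metrics.get(name, 0.0) > threshold:
            breach_counts[name] = breach_counts.get(name, 0) + 1
            if breach_counts[name] >= patience:
                return True
        else:
            breach_counts[name] = 0
    return False

# Example: the second consecutive error-rate breach triggers rollback.
counts = {}
print(should_rollback({"error_rate": 0.03, "latency_p99_ms": 500}, counts))  # False
print(should_rollback({"error_rate": 0.03, "latency_p99_ms": 500}, counts))  # True
```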
## 4) Explicit escalation criteria
Escalate to HR immediately if
- Allegations of harassment, discrimination, or retaliation; personal attacks or slurs.
- Psychological harm or safety concerns.
Escalate to senior leadership (product/eng) if
- If, 48–72 hours after changes are agreed, exclusion recurs in two or more decisions affecting shared work.
- Either team refuses to adopt basic inclusive governance (shared channel, decision log, RACI), directly endangering the launch.
- Material launch risk (e.g., critical path blockers >3 days, repeatedly missed SLAs) without viable mitigation.
Documentation for escalation
- Timeline of incidents, artifacts/screenshots, documented asks and responses, impact on deliverables, and proposed remedies attempted.
## 5) Success metrics and instrumentation
Leading indicators (weekly)
- Inclusion coverage: % of decisions in the log with both teams represented (target ≥90%).
- Response SLAs: median response time to cross-team questions (target ≤24h; ideally ≤4h during business hours).
- Participation: number of cross-team comments on shared docs/PRs; cross-team reviewers per PR (target ≥2 total, with ≥1 from each team).
- Pulse safety: 2-question anonymous check-in (1–5) on “I feel included in decisions affecting my work” and “I can speak up without negative consequences” (target +1 point from baseline).
- Risk burndown: active blockers and open risks trending down week over week.
Lagging indicators
- Launch timeliness: met vs. slipped; if slipped, slip days attributable to collaboration issues.
- Quality: post-launch incidents/regressions attributable to cross-team misalignment; rework hours.
- Sustained sentiment: monthly team climate pulse; 360 feedback mentions of collaboration.
Instrumentation
- Calendar analytics: compare invite lists for decision meetings against the stakeholder map.
- Slack/Docs: shared channel adoption; doc access lists; comment counts; decision log completeness.
- Issue tracker: cycle time, SLA adherence, cross-team review counts.
- Quick anonymous survey (e.g., form with timestamp and team) stored in a sheet for trend tracking.
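A lightweight way to compute the leading indicators is a weekly rollup over the decision-log export. The sketch below assumes a hypothetical CSV schema (columns `date`, `teams_represented`, `question_opened_at`, `answered_at`) and two teams labeled A and B; adapt it to whatever your log actually captures.

```python
# Minimal sketch of the weekly leading-indicator rollup. The CSV schema
# (column names, team labels) is a hypothetical assumption.
import pandas as pd

log = pd.read_csv(
    "decision_log.csv",
    parse_dates=["date", "question_opened_at", "answered_at"],
)

# Inclusion coverage: share of logged decisions with both teams represented.
log["both_teams"] = log["teams_represented"].str.split(",").apply(
    lambda teams: {"A", "B"} <= {t.strip() for t in teams}
)

# Response SLA: hours from a cross-team question being opened to answered.
log["sla_hours"] = (
    log["answered_at"] - log["question_opened_at"]
).dt.total_seconds() / 3600

weekly = log.groupby(log["date"].dt.to_period("W")).agg(
    coverage=("both_teams", "mean"),           # target >= 0.90
    median_sla_hours=("sla_hours", "median"),  # target <= 24
)
print(weekly)
```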
## 6) Likely failure modes and course-corrections
Failure mode 1: Process feels punitive; Team A becomes defensive and works more privately.
- Course-correct: Reframe around launch risk and outcome success, not blame. Reduce ceremony (lighter templates), highlight quick wins, and acknowledge Team A’s constraints.
Failure mode 2: Added process slows delivery.
- Course-correct: Timebox meetings, move to async updates, narrow decision forums to only required roles, and set a “default to proceed” with published dissent when risk is low.
# Part B — Rapid Learning Under Deadline (STAR)
Example STAR: Geo-experimentation to measure incrementality in ≤14 days
- Situation: Two weeks before a major budget decision, leadership needed a confident read on the incremental impact of a large marketing campaign where user-level attribution was unreliable. No one on my team had run geo-experiments recently.
- Task: Learn and implement a pragmatic geo-experiment (market-level randomized test) to inform a multi-million-dollar spend decision before the finance gate.
- Actions:
- Scoping and cuts (Day 1–2):
- Cut: building a general-purpose library, advanced Bayesian structural time series, and fancy dashboards.
- Focus: matched-market randomization with difference-in-differences (DiD), pre-trend checks, power analysis, and a simple readout.
- Rapid learning plan (Day 1–3):
- 2 expert 30-min consults; read 2 canonical papers/guides on geo-lift and DiD; drafted a 2-page design doc (assumptions, risks, analysis plan).
- Prototype and validation (Day 3–6):
- Built a Python notebook to match markets on history and seasonality, run power simulations, perform DiD with placebo tests, and compute confidence intervals (a minimal power simulation is sketched after this example).
- Back-tested on 3 prior campaigns to validate bias/variance and calibrate expected lift error.
- Pilot and execution (Day 5–14, overlapping final validation):
- Ran a 10-day test on 10 matched market pairs (treatment/control) with pre-registered guardrails.
- Daily monitoring with pre-set stop conditions (e.g., severe divergences, external shocks).
- Readout (Day 14):
- Produced a 1-page executive summary with lift estimate, 95% CI, sensitivity analyses, and clear recommendation.
- Results:
- The measured incremental lift was 6% (95% CI: 2–10%), substantially lower than the 20% implied by multi-touch attribution.
- Leadership reallocated budget, avoiding over-spend and improving expected ROI. We also adopted the design as a template for future geo-tests.
- Mistake and what I’d do differently:
- Mistake: My initial power analysis underestimated variance from regional shocks, leaving the final confidence interval wider than planned.
- Fix: Increase market pairs and extend duration for future tests; add synthetic controls as a robustness check. I also added a pre-registered external events log to flag shocks.
- How I validated I was learning the right things:
- Expert reviews on the design doc; success criteria agreed up front (minimum detectable effect, CI width, decision threshold).
- Back-tests with prior campaigns to check calibration; placebo tests to validate parallel trends.
- How I measured impact:
- Quantified delta between attribution-based and incremental lift; tied to dollars avoided/redirected.
- Secondary: time-to-decision met the 14-day gate; methodology reused by two other teams within a quarter.
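To illustrate the power-simulation step referenced in the Actions above, here is a minimal sketch; the lift, noise level, and pair counts are illustrative assumptions, not the campaign’s real numbers. It also shows how the underestimated variance in the mistake above erodes power, and how adding market pairs recovers it.

```python
# Minimal matched-pair power simulation (illustrative parameters only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulated_power(n_pairs=10, lift=0.06, noise_sd=0.05, n_sims=5000, alpha=0.05):
    """Fraction of simulated tests where a paired t-test detects a positive lift.

    Each matched pair contributes one observed lift: the true lift plus
    pair-level noise (noise_sd = std dev of the pair difference, as a
    fraction of the baseline metric).
    """
    detected = 0
    for _ in range(n_sims):
        observed = lift + rng.normal(0.0, noise_sd, size=n_pairs)
        _, p_value = stats.ttest_1samp(observed, 0.0)  # two-sided p
        if p_value < alpha and observed.mean() > 0:
            detected += 1
    return detected / n_sims

print(simulated_power(noise_sd=0.05))              # planned variance
print(simulated_power(noise_sd=0.10))              # regional shocks: power drops
print(simulated_power(noise_sd=0.10, n_pairs=25))  # mitigation: more pairs
```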
# Part C — Conflict With a Senior Stakeholder Over Analytic Approach
Example: Incrementality vs. attribution for spend decision
- Situation: A senior marketing leader wanted to scale a campaign based on last-click attribution showing ~20% lift. I disagreed, arguing we needed an incrementality test to avoid overestimating impact.
- Task: Reach a decision that balanced speed and confidence without jeopardizing quarterly revenue.
- Actions:
- Surface interests vs. positions:
- Position (stakeholder): “Scale now; attribution says 20%.” Interest: hit revenue targets quickly, minimize risk of under-spending.
- My position: “Run an experiment first.” Interest: make a confident, defensible decision; avoid wasted spend and false positives.
- Trade-offs proposed:
- Rapid compromise: a two-week geo-experiment sprint (a 10-day test on 10 matched market pairs) covering ~15% of spend, keeping the remaining ~85% at status quo.
- Calibrate: use the experimental lift to adjust the attribution model going forward (a minimal sketch follows this list).
- Guardrails: pre-registered KPIs, stop-loss thresholds, and executive-aligned decision rules.
- Execution:
- Designed DiD with pre-trend checks; daily monitoring; transparent updates to the stakeholder.
- Results and quantified impact:
- Experimental lift: ~6% (95% CI: 2–10%), not 20%.
- Decision: proceed with a smaller scale-up and reallocate part of the budget to higher-ROI channels.
- Estimated impact: avoided several million dollars in low-ROI spend over the quarter; improved blended ROI by ~10–15% relative to the proposed plan.
- Why this worked:
- Addressed the stakeholder’s interest in speed via a bounded test; preserved decision quality with an objective experiment.
- Provided a calibration factor to bridge attribution and incrementality, improving future decision-making.
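The calibration idea reduces to simple arithmetic; here is a minimal sketch, under the strong simplifying assumption of a constant multiplicative bias between attribution and incrementality (the numbers mirror the example above).

```python
# Calibrating attribution reads with the experimental result, assuming a
# constant multiplicative bias (a strong simplification).
attribution_lift = 0.20   # lift implied by last-click attribution
experimental_lift = 0.06  # incremental lift measured by the geo-experiment

calibration_factor = experimental_lift / attribution_lift  # = 0.3

def calibrated(attributed_lift: float) -> float:
    """Down-weight a future attribution read by the measured bias."""
    return attributed_lift * calibration_factor

# A future channel reporting 12% attributed lift reads as ~3.6% incremental.
print(f"{calibrated(0.12):.1%}")
```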
Notes on rigor and pitfalls addressed
- DiD assumptions: checked parallel trends with placebo tests and visual inspection; added robustness via sensitivity analysis.
- Risk guardrails: defined stop conditions and communicated uncertainty explicitly; pre-registered analysis to avoid p-hacking.
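To ground these notes, here is a self-contained sketch of a DiD estimate with pair fixed effects, clustered standard errors, and a pre-period placebo check; the data are synthetic and every number is illustrative, not taken from the actual campaign.

```python
# DiD with a pre-period placebo check on synthetic matched-pair data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
rows = []
for pair in range(10):        # 10 matched market pairs
    base = rng.normal(100, 10)
    for day in range(20):     # 10 pre-launch + 10 post-launch days
        post = int(day >= 10)
        for treated in (0, 1):
            lift = 0.06 * base if treated and post else 0.0  # ~6% true lift
            y = base + 0.5 * day + lift + rng.normal(0, 3)
            rows.append(dict(pair=pair, day=day, post=post, treated=treated, y=y))
df = pd.DataFrame(rows)

# DiD: the treated:post coefficient is the incremental effect, with pair
# fixed effects and standard errors clustered by market pair.
did = smf.ols("y ~ treated * post + C(pair) + day", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["pair"]}
)
print("lift estimate:", did.params["treated:post"])
print("95% CI:", did.conf_int().loc["treated:post"].values)

# Placebo: rerun the DiD inside the pre-period with a fake cutoff; a
# near-zero "effect" supports the parallel-trends assumption.
pre = df[df["post"] == 0].copy()
pre["fake_post"] = (pre["day"] >= 5).astype(int)
placebo = smf.ols("y ~ treated * fake_post + C(pair) + day", data=pre).fit(
    cov_type="cluster", cov_kwds={"groups": pre["pair"]}
)
print("placebo estimate:", placebo.params["treated:fake_post"])
```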