
Diagnose and reduce first-action drop-offs

Last updated: Mar 29, 2026

Quick Overview

This question evaluates data-driven program leadership skills, including instrumentation and analytics, process and incentive design, fairness and anti-gaming controls, capacity planning, and cross-functional ownership with SLAs.



Company: Uber

Role: Data Scientist

Category: Behavioral & Leadership

Difficulty: hard

Interview Round: Onsite

You lead a program where candidates must pass paperwork review and then complete a first critical action to qualify. Many finish the paperwork but never complete that first action, wasting reviewer time and candidate effort. With a 14-day deadline from application and reviewer capacity of 300 files/week, what concrete leadership actions will you take to: (a) instrument the funnel to identify the top 3 drop-off causes within one week, (b) change incentives/process so ≥60% of paperwork-complete candidates complete the first action within 14 days, (c) preserve fairness and prevent gaming (e.g., low-quality first actions to qualify), and (d) assign clear ownership, SLAs, and escalation paths across teams? Specify success metrics, governance rituals, and how you would decide between speeding the process vs. offering perks.


Solution

## Goals and Success Metrics

- North Star: ≥60% of paperwork-complete candidates complete the first action within 14 days of application (P14 conversion).
- Quality-adjusted goal: ≥55% quality-approved first actions within 14 days (QA-P14), where quality is judged against a clear rubric.
- Secondary metrics:
  - Time to First Action (TTFA): median and p90 from paperwork completion to first-action completion.
  - Reviewer throughput and SLA compliance (95% of files reviewed within 48 hours).
  - Backlog health: open reviews ≤300 (300/week × 1 week) to maintain cycle time.
  - Fairness guardrails: segment gaps in QA-P14 within ±5 percentage points; false-positive/negative audit rates similar across segments.
  - Gaming guardrails: low-quality attempt rate ≤5%; random audit fail rate ≤2%.

Definitions (a computation sketch follows this section):

- P14 = (# candidates who complete the first action by day 14 after application) / (# candidates who completed paperwork).
- QA-P14 = (# candidates with a quality-approved first action by day 14) / (# candidates who completed paperwork).

## (a) Instrument the Funnel in 1 Week to Find the Top 3 Drop-Off Causes

Funnel states (add events with timestamps):

1. application_submitted
2. paperwork_submitted
3. paperwork_approved (include reviewer_id, decision_time, reason_if_rejected)
4. first_action_started
5. first_action_completed
6. first_action_quality_decision (pass/fail + rubric codes)

Context and friction signals to log: channel/geo/device/language, availability windows, notifications sent/opened/clicked, schedule availability, UI errors, support contacts, payment/equipment blockers, reschedules/cancellations.

One-week execution plan:

- Day 0–1: Write the event spec, assign owners (Data/Eng/Ops), create quality rubric codes for the first action (≤10 mutually exclusive categories to start), and add a micro-survey for drop-outs (1–2 questions with a reason picklist plus free text).
- Day 1–3: Ship event logging and the micro-survey/interceptor on key screens (post-paperwork and before the deadline). Stand up a daily funnel dashboard with survival curves (time to first action) and a Pareto of drop-off reasons. Backfill the last 4 weeks if possible.
- Day 3–5: Triangulate qualitative and quantitative evidence:
  - 10–15 same-day user calls across segments (fast sample).
  - Analyze support tickets and free text with simple keyword tagging.
  - Fit a quick logistic regression (with SHAP for interpretation) on 14-day conversion to rank correlates (availability lag, scheduling friction, equipment/payment barriers, communication failures, review delay, etc.).
- Day 6–7: Publish the top 3 root causes with quantified impact (e.g., 35% cite "no appointment availability within 72h", 22% "cost barrier", 17% "unclear instructions/failed quality on first try"). Define an experiment for each cause.

Analytics to use (sketches follow this section):

- Survival analysis: hazard of first action over time; identify windows with steep drop-off (e.g., after day 3).
- Cohort charts by paperwork week to separate trend from seasonality.
- Pareto of cause codes from survey and ticket tags; sanity-check against behavioral events.

Pitfalls and guardrails:

- Separate correlation from causation; verify with quick A/B tests or holdouts.
- Keep event IDs stable across platforms; debounce duplicate events.
- Survey responses may be biased; weight by response propensity if available.
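
To make the P14/QA-P14/TTFA definitions concrete, here is a minimal sketch in pandas, assuming one row per candidate with hypothetical columns `applied_at`, `paperwork_done_at`, `first_action_done_at` (datetimes, NaT if missing) and `qa_passed` (bool):

```python
import pandas as pd

def funnel_metrics(df: pd.DataFrame) -> dict:
    """Compute P14, QA-P14, and TTFA from per-candidate timestamps."""
    # Denominator: candidates who completed paperwork.
    done = df[df["paperwork_done_at"].notna()]

    # First action completed within 14 days of *application* (NaT compares as False).
    days_to_action = (done["first_action_done_at"] - done["applied_at"]).dt.days
    in_14 = days_to_action.le(14)

    # TTFA: paperwork completion -> first-action completion, in days.
    ttfa = (
        done["first_action_done_at"] - done["paperwork_done_at"]
    ).dt.total_seconds() / 86400.0

    return {
        "P14": in_14.mean(),
        "QA_P14": (in_14 & done["qa_passed"].fillna(False)).mean(),
        "TTFA_median_days": ttfa.median(),
        "TTFA_p90_days": ttfa.quantile(0.9),
        "denominator": len(done),
    }
```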
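
For the dashboard's survival curve, a minimal sketch using the lifelines library (assuming it is available, and reusing the hypothetical columns above); since the deadline runs 14 days from application, incomplete candidates are censored there:

```python
import pandas as pd
from lifelines import KaplanMeierFitter

def first_action_survival(df: pd.DataFrame, now: pd.Timestamp) -> pd.DataFrame:
    """Kaplan-Meier curve of time from application to first action.

    Candidates who have not completed the first action are censored at
    `now` or at the 14-day deadline, whichever comes first.
    """
    completed = df["first_action_done_at"].notna()
    end = df["first_action_done_at"].fillna(now)
    raw_days = (end - df["applied_at"]).dt.total_seconds() / 86400.0

    event = completed & raw_days.le(14)       # completions after day 14 don't count
    duration = raw_days.clip(lower=0, upper=14)

    kmf = KaplanMeierFitter()
    kmf.fit(durations=duration, event_observed=event)
    # Steep drops in the curve mark the days where candidates stall;
    # compare curves across cohorts/segments to localize the top causes.
    return kmf.survival_function_
```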
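
For the day 3–5 correlate ranking, a rough sketch with scikit-learn; the feature names are illustrative assumptions, and SHAP values can replace raw coefficients if the model moves beyond linear:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical friction features joined from the event log.
FEATURES = ["availability_lag_h", "reschedules", "review_delay_h",
            "notifications_opened", "support_contacts"]

def rank_correlates(df: pd.DataFrame) -> pd.Series:
    """Rank candidate friction signals by association with 14-day conversion."""
    X = StandardScaler().fit_transform(df[FEATURES])   # standardize so coefficients are comparable
    y = df["converted_14d"].astype(int)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    return pd.Series(model.coef_[0], index=FEATURES).sort_values(key=abs, ascending=False)
```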
## (b) Change Incentives/Process to Reach ≥60% Within 14 Days

Address the top causes with targeted interventions while respecting reviewer capacity (300/week).

Process speedups (preferred first; usually cheaper and more durable):

- Auto-schedule: on paperwork approval, auto-book the earliest first-action slot within 72 hours; the candidate can reschedule in-app.
- Fast-lane capacity: reserve a daily buffer (e.g., 20% of slots) to guarantee availability for newly approved candidates; expand dynamically if the backlog is below threshold.
- Instructional clarity: add a 60-second walkthrough plus a checklist; confirm readiness via a 3-item pre-flight that blocks attempts when prerequisites are missing.
- One-tap start: reduce clicks from 5 to 2, prefill fields, show a time estimate, and surface live support.
- SLA on paperwork: 95% approval within 48h, which makes scheduling predictable.

Targeted incentives (cost-controlled):

- Time-bounded nudge ladder (automated): Day 0, immediate confirmation plus auto-schedule; Day 1, SMS with calendar link; Day 3, reminder; Day 5, "2-day deadline approaching"; Day 10, final call. Personalize by best time of day and language.
- Commitment device: let candidates pick a slot within 24h of paperwork completion; missed slots trigger escalation and a personalized reschedule.
- Small completion bonus or fee waiver only for high-propensity segments with cost barriers (identified via model or eligibility rules), with a capped budget and weekly ROI review.

Capacity alignment using Little's Law (a worked computation follows this section):

- Little's Law: WIP = Throughput × Cycle Time. With 300 reviews/week of capacity and a 1-week cycle-time target, keep the backlog ≤300 to sustain the 48h SLA.
- If weekly paperwork approvals exceed capacity, throttle auto-scheduling or add surge reviewers; otherwise speed changes may create queues and hurt conversion.

Example impact sizing:

- If auto-scheduling lifts show-ups by +10 pp and the clarity checklist cuts failed attempts by +5 pp, the combined net could move P14 from 40% to 55%.
- Adding a targeted $X incentive for the cost-constrained group (20% of the population) that yields +8 pp within that group adds +1.6 pp overall, for ≈56.6% in total (≈63% if the lift applied to the whole population), bracketing the 60% target.

Validation plan:

- Stagger the rollout by cohort or geo; measure QA-P14, TTFA, and rework rate. Use intent-to-treat analysis to avoid survivorship bias.

## (c) Preserve Fairness and Prevent Gaming

Define quality and enforce it:

- Quality rubric for the first action: objective criteria (duration thresholds, required artifacts, geo/time plausibility, no duplicate/spam patterns). Publish the rubric to candidates.
- Automated checks plus random audits: 100% automated checks; 10% random human review across all segments; 100% audit of new or changed flows for the first 2 weeks.
- Rework policy: one allowed retry with guidance; identical standards across segments.

Anti-gaming signals (see the guardrail sketch after this section):

- Flag short durations, impossible sequences, repeated identical uploads, shared device fingerprints, and unusual reschedule patterns. Maintain a ruleset with thresholds and review it weekly.

Fairness safeguards:

- Monitor QA-P14, audit fail rate, and false-positive flags by segment (region, device, language). Investigate any gap >5 pp.
- Use counterfactual modeling (propensity-adjusted comparisons) to ensure incentives or the fast lane don't disproportionately disadvantage any group.
- Keep quality thresholds identical; never tie perks to protected attributes; base eligibility on behavioral signals (e.g., showed up to a scheduled slot) or verified need (cost barrier).
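
To make the Little's Law check concrete, a tiny sketch using the capacity figure from the prompt; the intake numbers in the example call are hypothetical:

```python
# Little's Law sanity check for the reviewer queue: WIP = Throughput x Cycle Time.
THROUGHPUT_PER_WEEK = 300        # reviewer capacity from the prompt (files/week)
TARGET_CYCLE_TIME_WEEKS = 1.0    # keep cycle time at ~1 week to hold the 48h SLA

MAX_WIP = THROUGHPUT_PER_WEEK * TARGET_CYCLE_TIME_WEEKS   # = 300 open files

def should_throttle(open_reviews: int, weekly_intake: int) -> bool:
    """Throttle auto-scheduling (or add surge reviewers) if next week's
    projected queue would exceed the sustainable WIP ceiling."""
    projected = open_reviews + weekly_intake - THROUGHPUT_PER_WEEK
    return projected > MAX_WIP

print(should_throttle(open_reviews=250, weekly_intake=400))  # True -> act now
```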
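
A minimal sketch of the weekly guardrail check, combining the anti-gaming flags and the ±5 pp fairness trigger; column names and thresholds are illustrative assumptions, not a production ruleset:

```python
import pandas as pd

MIN_DURATION_MIN = 5      # e.g., first actions shorter than 5 minutes look suspect
MAX_DEVICE_SHARING = 3    # same device fingerprint across >3 candidates
FAIRNESS_GAP_PP = 5.0     # trigger from the plan: segment gap > 5 percentage points

def gaming_flags(actions: pd.DataFrame) -> pd.Series:
    """Boolean mask of first actions to route to human audit."""
    too_short = actions["duration_min"] < MIN_DURATION_MIN
    dup_upload = actions.duplicated(subset=["artifact_hash"], keep=False)
    shared_dev = (
        actions.groupby("device_fingerprint")["candidate_id"]
        .transform("nunique") > MAX_DEVICE_SHARING
    )
    return too_short | dup_upload | shared_dev

def fairness_alert(candidates: pd.DataFrame, segment_col: str) -> bool:
    """True if the QA-P14 gap between best and worst segment exceeds 5 pp."""
    qa_p14 = candidates.groupby(segment_col)["qa_p14_pass"].mean() * 100
    return (qa_p14.max() - qa_p14.min()) > FAIRNESS_GAP_PP
```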
## (d) Ownership, SLAs, and Escalation Paths

RACI and owners:

- Product: owns funnel UX, auto-scheduling, and incentive design; accountable for P14.
- Data Science: measurement, root-cause analysis, experiment design, fairness monitoring; accountable for analysis quality and guardrails.
- Engineering: events, services, scheduling infrastructure, and anti-gaming automation; accountable for reliability and latency.
- Operations (reviewers): capacity planning, QA rubric execution, audits; accountable for SLAs and backlog.
- Risk/Compliance: defines prohibited behaviors; approves rules and audits.
- Support/Comms: messaging, translations, escalation playbooks.
- Exec sponsor: unblocks resources; chairs the weekly business review.

SLAs:

- Paperwork review: 95% within 48h; 99% within 72h.
- Auto-schedule: 90% of approved candidates get a slot within 72h.
- Candidate comms: 95% of inquiries answered within 24h.
- Data freshness: funnel dashboard updated hourly; experiment reports daily.

Escalation paths:

- If backlog >300 or SLA breach >2 days: Ops lead pages on-call; add surge reviewers within 48h or throttle approvals.
- If QA-P14 drops >5 pp week over week: pause new experiments, roll back the last change, convene an incident review within 24h.
- If fairness gap >5 pp: freeze incentive targeting until Risk approves a mitigation plan.

## Governance Rituals

- Daily (first 4 weeks): 15-minute stand-up on P14, TTFA, backlog, and incidents.
- Weekly business review: experiment readouts, Pareto of drop-offs, capacity vs. demand, fairness/gaming dashboard, decision log.
- Monthly audit: random-sample re-review, rubric calibration, and post-hoc fairness analysis.
- Experiment governance: pre-registered success metrics and stop-loss rules; change log with owners and rollout plans.

## Deciding Between Speed vs. Perks

Run a 2×2 test (speed on/off × perk on/off) and choose based on cost per quality completion and fairness.

Decision metric (a computation sketch appears at the end of this solution):

- Incremental Quality-Weighted Completions (iQWC) = Δ(QA-P14) × cohort size.
- Cost per iQWC = (operational + incentive + amortized engineering cost) / iQWC.
- Choose the arm with the lower cost per iQWC, subject to capacity and fairness constraints.

Small numeric example:

- Baseline: P14 = 45%, QA-P14 = 42% on a 1,000-person cohort → 420 QA completions.
- Speed only: QA-P14 +8 pp → 500 completions; +80 iQWC; cost +$3k for surge reviewers → $37.5/iQWC.
- Perk only: QA-P14 +5 pp → 470; +50 iQWC; cost $5k in incentives → $100/iQWC.
- Combo: QA-P14 +12 pp → 540; +120 iQWC; cost $7k → $58.3/iQWC.
- Pick "speed only" first (best unit economics) and monitor capacity; layer targeted perks if still short of 60%.

Principle: prioritize process speed and clarity (systemic, compounding benefits). Use perks narrowly to overcome verifiable cost barriers, and always check for induced gaming and fairness impacts.

## 30-Day Outcome Targets

- Achieve ≥60% P14 and ≥55% QA-P14.
- TTFA median ≤3 days; p90 ≤6 days.
- Reviewer SLA ≥95% on time; backlog ≤300.
- Fairness gap ≤5 pp; audit fail rate ≤2%.

This plan delivers rapid diagnosis (within one week), targeted fixes, and durable governance to raise conversion while protecting quality, fairness, and operational health.
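
To make the speed-vs-perks decision metric concrete, a small sketch that reproduces the numeric example above (all figures come from that example; nothing here is measured data):

```python
# Cost per incremental quality-weighted completion (iQWC) on a 1,000-person
# cohort with baseline QA-P14 = 42% (420 QA completions).
COHORT = 1_000

arms = {                  # (QA-P14 lift in pp, cost in $) from the example
    "speed_only": (8, 3_000),
    "perk_only":  (5, 5_000),
    "combo":      (12, 7_000),
}

for name, (lift_pp, cost) in arms.items():
    iqwc = COHORT * lift_pp / 100        # incremental QA completions
    print(f"{name}: +{iqwc:.0f} iQWC at ${cost / iqwc:.1f} per iQWC")

# speed_only: +80 iQWC at $37.5 per iQWC   <- best unit economics, pick first
# perk_only:  +50 iQWC at $100.0 per iQWC
# combo:      +120 iQWC at $58.3 per iQWC
```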

Related Interview Questions

  • Describe a Trade-off Design Change - Uber
  • Describe ownership and failure - Uber (medium)
  • Answer Common Behavioral Questions - Uber (medium)
  • How do you manage performance and disagreements? - Uber (medium)
  • Describe an ML system you built - Uber (medium)
