Pick one of your production ML projects and walk through it end-to-end. Be specific:

1) Problem framing (prediction vs causal decisioning), target definition, and how you prevented label leakage.
2) Data sources, sampling window, and offline metric(s) with rationale (e.g., AUC vs calibration/Brier for monetization); see the metric sketch after this list.
3) Feature engineering, handling sparse/categorical signals, and how you enforced privacy/fairness constraints.
4) Model choices and tradeoffs (e.g., XGBoost vs shallow nets vs GLM), hyperparameter strategy, and ablations you ran.
5) Error analysis and post-deployment monitoring (drift, stability, guardrail metrics); see the PSI sketch below.
6) How you translated model lifts into product impact without an A/B test (e.g., causal uplift modeling, CUPED, backtests); see the CUPED sketch below.
7) What you would change on a v2 if given twice the data or stricter latency limits.
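
For item 2, a minimal sketch of the AUC-vs-calibration tradeoff: AUC measures ranking quality, while Brier score and a reliability check tell you whether the predicted probabilities themselves can be trusted (what matters when scores feed a bid, price, or expected-revenue calculation). `y_true` and `p_pred` are hypothetical stand-ins for your holdout labels and model scores.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=10_000)                          # binary labels
p_pred = np.clip(y_true * 0.6 + rng.normal(0.2, 0.2, 10_000), 0, 1)  # model scores

auc = roc_auc_score(y_true, p_pred)        # ranking quality only
brier = brier_score_loss(y_true, p_pred)   # mean squared error of the probabilities

# Simple reliability check: mean predicted probability vs observed rate per score decile.
edges = np.quantile(p_pred, np.linspace(0, 1, 11))
decile = np.clip(np.digitize(p_pred, edges[1:-1]), 0, 9)
for b in range(10):
    mask = decile == b
    if mask.any():
        print(f"decile {b}: mean_pred={p_pred[mask].mean():.3f} observed={y_true[mask].mean():.3f}")
print(f"AUC={auc:.3f}  Brier={brier:.3f}")
```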
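For item 5, one common drift guardrail is the Population Stability Index (PSI) between the training-time score distribution and the live one; values above roughly 0.2 are often treated as an alert. This is a hedged sketch under assumed data, not a prescribed production monitor.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, n_bins: int = 10) -> float:
    """PSI between a baseline (expected) and current (actual) score sample."""
    edges = np.quantile(expected, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf               # cover the full score range
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    eps = 1e-6                                          # avoid log(0) on empty bins
    return float(np.sum((a_frac - e_frac) * np.log((a_frac + eps) / (e_frac + eps))))

rng = np.random.default_rng(1)
baseline = rng.beta(2, 5, size=50_000)                  # scores at training time
live = rng.beta(2.5, 4.5, size=50_000)                  # slightly shifted live scores
print(f"PSI = {psi(baseline, live):.3f}")
```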
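For item 6, CUPED is the easiest of the listed techniques to demonstrate concretely: regress the outcome on a pre-period covariate and subtract the explained part, which shrinks variance without shifting the mean, tightening impact estimates when a clean A/B test is unavailable or underpowered. `pre_metric` and `post_metric` below are hypothetical per-user arrays (e.g., spend before vs after the model launch).

```python
import numpy as np

def cuped_adjust(post_metric: np.ndarray, pre_metric: np.ndarray) -> np.ndarray:
    """Return the CUPED-adjusted outcome: Y - theta * (X - mean(X))."""
    theta = np.cov(pre_metric, post_metric)[0, 1] / np.var(pre_metric)
    return post_metric - theta * (pre_metric - pre_metric.mean())

rng = np.random.default_rng(2)
pre_metric = rng.gamma(2.0, 10.0, size=20_000)                  # pre-period spend
post_metric = 0.8 * pre_metric + rng.normal(5.0, 8.0, 20_000)   # correlated outcome

adjusted = cuped_adjust(post_metric, pre_metric)
print(f"raw variance      = {post_metric.var():.1f}")
print(f"adjusted variance = {adjusted.var():.1f}")   # smaller -> tighter lift estimates
```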