Predictive Model Deep-Dive (End-to-End)
Pick one predictive model you know deeply (e.g., logistic regression, gradient-boosted trees, transformer classifier) and explain how it works end-to-end for a real problem you solved.
(a) Objective, loss, inductive biases/assumptions
- State the business objective, the model's training objective, and its loss function (see the log-loss sketch after this list).
- Explain the model's inductive biases/assumptions and when they are violated in practice.
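As a concrete anchor for (a): if the chosen model were logistic regression, its training objective is to minimize the average negative log-likelihood (log loss). A minimal sketch, assuming binary labels and predicted probabilities; all names and numbers below are illustrative:

```python
import numpy as np

# Log loss: the average negative log-likelihood that logistic
# regression minimizes during training.
def log_loss(y_true: np.ndarray, p_pred: np.ndarray, eps: float = 1e-12) -> float:
    p = np.clip(p_pred, eps, 1 - eps)  # guard against log(0)
    return float(-np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

y = np.array([1, 0, 1, 1])
p = np.array([0.9, 0.2, 0.7, 0.4])
print(round(log_loss(y, p), 3))  # ~0.4 for these toy predictions
```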
(b) Features and validation
- Describe your feature engineering, including handling of categorical/high-cardinality features and time-based aggregates.
- Explain your validation strategy (i.i.d. vs. time-based splits), how you prevented leakage, and how you checked stationarity (see the split sketch after this list).
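For the validation bullet, a common leakage guard is a forward-chaining, time-ordered split. A minimal sketch using scikit-learn's `TimeSeriesSplit` on toy data; the `gap` setting is one way to keep late-maturing labels out of the training window:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# Rows are assumed sorted by event time; each fold trains on the past
# and validates on strictly later rows, so future data cannot leak.
X = np.arange(20).reshape(-1, 1)            # toy, time-ordered features
y = (np.arange(20) % 3 == 0).astype(int)    # toy labels

tscv = TimeSeriesSplit(n_splits=4, gap=1)   # gap skips rows whose labels mature late
for train_idx, val_idx in tscv.split(X):
    print(f"train <= row {train_idx.max()}, validate rows {val_idx.min()}-{val_idx.max()}")
```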
(c) Training
- Walk through hyperparameter search, regularization, early stopping, and handling of class imbalance (class weights, focal loss, resampling). Justify your choices quantitatively (see the search sketch after this list).
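One way to make the search and imbalance choices concrete: inverse-frequency class weights plus a small regularization grid scored on PR-AUC. A sketch on synthetic data; the scorer, grid, and weights are illustrative choices, not prescriptions:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# ~5% positives; class_weight="balanced" reweights the loss by inverse
# class frequency instead of resampling the data.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)

search = GridSearchCV(
    LogisticRegression(class_weight="balanced", max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},  # inverse regularization strength
    scoring="average_precision",               # PR-AUC behaves better with rare positives
    cv=5,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```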
(d) Training/inference issues (pick three)
- Detail three concrete issues you encountered (e.g., covariate shift, label noise, calibration drift, offline/online feature skew, latency/throughput limits).
- For each, explain how you detected, diagnosed, and fixed it, citing the checks, plots, or metrics you used (see the drift sketch after this list).
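For "how you detected it": covariate shift is often screened with the Population Stability Index (PSI) per feature and per score. A minimal sketch; the 0.1/0.25 thresholds are common rules of thumb, not values from this text:

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference sample and a live sample."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0] = min(edges[0], actual.min()) - 1e-9    # widen to absorb out-of-range values
    edges[-1] = max(edges[-1], actual.max()) + 1e-9
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, 10_000)    # training-time feature distribution
live = rng.normal(0.5, 1.0, 10_000)   # shifted production distribution
print(round(psi(ref, live), 3))       # rule of thumb: >0.1 investigate, >0.25 act
```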
(e) Evaluation beyond ROC/PR
- Discuss calibration, cost-sensitive metrics, and business KPIs, and explain how you translated model lift into expected value (see the threshold sketch after this list).
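For translating lift into expected value, one common pattern is to sweep the decision threshold and score each cut by unit economics. A sketch with made-up payoffs; `tp_value` and `fp_cost` are placeholders you would replace with your own numbers:

```python
import numpy as np

def expected_value(y, p, threshold, tp_value=100.0, fp_cost=5.0):
    """Net value of acting on scores above threshold (illustrative payoffs)."""
    act = p >= threshold
    tp = np.sum(act & (y == 1))   # correctly targeted positives
    fp = np.sum(act & (y == 0))   # wasted actions on negatives
    return tp * tp_value - fp * fp_cost

rng = np.random.default_rng(1)
y = rng.binomial(1, 0.1, 5_000)                                  # ~10% positives
p = np.clip(0.4 * y + rng.uniform(0.0, 0.6, 5_000), 0.0, 1.0)    # crude correlated scores

thresholds = np.linspace(0.05, 0.95, 19)
best = max(thresholds, key=lambda t: expected_value(y, p, t))
print(f"best threshold {best:.2f}, expected value {expected_value(y, p, best):.0f}")
```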
(f) Fairness, privacy, and monitoring
- Describe fairness and privacy considerations.
- Outline post-deployment monitoring: drift-detection thresholds, alerting, rollback criteria, and canarying (see the policy sketch after this list).
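The operational core of the monitoring bullet is usually a small decision policy: a drift threshold that pages someone and a quality floor that triggers rollback. A minimal sketch; `ALERT_PSI` and `ROLLBACK_AUC` are hypothetical values to be calibrated from your own baselines, and the canary comparison is omitted for brevity:

```python
# Hypothetical operating thresholds; set these from backtests, not from this text.
ALERT_PSI = 0.2      # score-distribution drift beyond this pages the on-call
ROLLBACK_AUC = 0.65  # live AUC below this floor reverts to the last good model

def monitor_action(live_psi: float, live_auc: float) -> str:
    if live_auc < ROLLBACK_AUC:
        return "rollback"      # measurable harm: revert first, investigate after
    if live_psi > ALERT_PSI:
        return "investigate"   # inputs drifted: candidate for retrain/recalibration
    return "ok"

print(monitor_action(live_psi=0.35, live_auc=0.71))  # -> investigate
```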