You inherit an ML pipeline that predicts next-7-day churn for users, but data quality is inconsistent and feature drift is suspected.

A) Propose an end-to-end pipeline design covering: temporal data slicing (label window vs. feature window), leakage controls (e.g., using only information available up to prediction time), a cross-validation scheme appropriate for time series, and a feature store strategy that guarantees training/serving consistency.

B) Define offline metrics (e.g., AUC, PR-AUC, calibration error) and business metrics (e.g., uplift in retention from targeted interventions). Specify how you would threshold scores to optimize a cost-sensitive objective with asymmetric costs.

C) Describe concrete data-quality and drift monitors: missingness rates, schema checks, training-serving skew, and feature drift using PSI/JS divergence with alert thresholds (e.g., PSI > 0.25 is severe). Include how you would separate drift in the covariates from drift in the target caused by product changes.

D) Detail an online rollout plan: canary scoring, shadow mode, real-time monitoring, rollback triggers, and retraining cadence. Define explicit retraining triggers (e.g., retrain weekly if PSI is moderate for two consecutive weeks or a business KPI degrades by X%). Address fairness checks across at least two sensitive cohorts and how you would mitigate any disparities.
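For Part A, one way an answer might make the temporal slicing concrete is a rolling-origin split generator: each split has a feature window that ends strictly at the prediction time and a 7-day label window that begins there, so no feature can see label-period data. This is a minimal sketch, not the prompt's prescribed design; the window lengths and the `rolling_origin_splits` helper name are illustrative assumptions.

```python
from datetime import date, timedelta

def rolling_origin_splits(start, end, feature_days=30, label_days=7, step_days=7):
    """Generate leakage-safe (feature window, label window) pairs.

    Features use only data in [cutoff - feature_days, cutoff), i.e. strictly
    before the prediction time; the label covers [cutoff, cutoff + label_days),
    the next-7-day churn outcome. Windows roll forward by step_days, mimicking
    weekly scoring in production.
    """
    splits = []
    cutoff = start + timedelta(days=feature_days)
    while cutoff + timedelta(days=label_days) <= end:
        splits.append({
            "prediction_time": cutoff,
            "feature_window": (cutoff - timedelta(days=feature_days), cutoff),
            "label_window": (cutoff, cutoff + timedelta(days=label_days)),
        })
        cutoff += timedelta(days=step_days)
    return splits
```

Evaluating each split in chronological order (train on earlier splits, validate on the next one) gives the expanding- or rolling-window cross-validation the question asks for, without shuffling across time.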
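For Part B's cost-sensitive thresholding, a sketch of the standard approach is to sweep candidate thresholds on a validation set and pick the one minimizing expected cost under asymmetric false-positive/false-negative costs. The cost values and function name here are hypothetical placeholders, not figures from the prompt.

```python
import numpy as np

def best_threshold(y_true, scores, cost_fp=1.0, cost_fn=5.0):
    """Pick the score threshold minimizing total asymmetric cost.

    cost_fp: cost of intervening on a user who would not have churned.
    cost_fn: cost of missing a churner (typically the larger of the two).
    """
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    best_t, best_cost = 0.5, np.inf
    for t in np.unique(scores):
        pred = scores >= t
        fp = int(np.sum(pred & (y_true == 0)))
        fn = int(np.sum(~pred & (y_true == 1)))
        cost = cost_fp * fp + cost_fn * fn
        if cost < best_cost:
            best_t, best_cost = float(t), float(cost)
    return best_t, best_cost
```

If the scores are well calibrated (which is why the prompt lists calibration error as an offline metric), the Bayes-optimal threshold can instead be computed in closed form as cost_fp / (cost_fp + cost_fn); the empirical sweep is the more robust choice when calibration is imperfect.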
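For Part C, the PSI monitor the question names can be sketched directly: bin a baseline (training) sample by its own quantiles, compare current-traffic bin fractions against it, and map the score onto the alert bands the prompt suggests. The 0.10 "moderate" cutoff is a common convention assumed here; only the 0.25 "severe" cutoff comes from the prompt.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index of `actual` against baseline `expected`."""
    # Bin edges from the baseline's quantiles; clip both samples into range
    # so out-of-range serving values fall into the edge bins.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e_frac = np.histogram(np.clip(expected, edges[0], edges[-1]), bins=edges)[0] / len(expected)
    a_frac = np.histogram(np.clip(actual, edges[0], edges[-1]), bins=edges)[0] / len(actual)
    # Floor avoids log(0) for empty bins.
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

def drift_severity(psi_value):
    """Map a PSI value onto alert bands (0.10/0.25 thresholds assumed)."""
    if psi_value > 0.25:
        return "severe"
    if psi_value > 0.10:
        return "moderate"
    return "stable"
```

Running this per feature separates covariate drift from target drift: a feature can show severe PSI while the realized churn rate is unchanged, or vice versa when a product change shifts the label distribution itself.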
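For Part D, the explicit retraining trigger the prompt describes ("PSI moderate for two consecutive weeks, or a business KPI degrades by X%") can be expressed as a small predicate. The 0.10 moderate-PSI cutoff and the 5% default KPI-drop tolerance are illustrative stand-ins for the prompt's unspecified X.

```python
def should_retrain(weekly_psi, kpi_baseline, kpi_current,
                   psi_moderate=0.10, kpi_drop_pct=5.0):
    """Retraining trigger: fire if the two most recent weekly PSI readings
    both exceed the moderate threshold, or if the business KPI has fallen
    by more than kpi_drop_pct percent relative to its baseline."""
    two_weeks_moderate = (
        len(weekly_psi) >= 2 and all(p > psi_moderate for p in weekly_psi[-2:])
    )
    kpi_degraded = (kpi_baseline - kpi_current) / kpi_baseline * 100 > kpi_drop_pct
    return two_weeks_moderate or kpi_degraded
```

In a rollout, this check would run against the shadow/canary monitoring stream each week; a fired trigger could kick off retraining, while a severe breach would instead hit the rollback path.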