Bias–Variance Trade-off and Reducing a Train–Test Performance Gap
Scenario
You are evaluating a supervised learning model and observe that training accuracy is significantly higher than test accuracy.
Question
- Explain the bias–variance trade-off in supervised learning.
- Your model performs significantly better on the training set than on the test set. What practical steps can you take to address this gap?
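The gap described in the question can be reproduced concretely. The sketch below is a hypothetical illustration (it assumes scikit-learn and uses a synthetic dataset, not any model from the scenario): an unconstrained decision tree memorizes the training set, so its training accuracy far exceeds its test accuracy.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification data: 20 features, only 5 of them informative.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A fully grown tree can fit the training set almost perfectly (high variance).
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("train accuracy:", tree.score(X_train, y_train))
print("test accuracy:", tree.score(X_test, y_test))
```

A large difference between the two printed accuracies is exactly the symptom the scenario describes: the model has low bias on the training data but high variance on unseen data.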
Hints
- Relate variance to model complexity.
- Consider regularization, cross-validation, simpler models, more data, early stopping, ensembling, and feature engineering.
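Two of the hinted remedies, constraining model complexity and validating with cross-validation, can be sketched together. This is a minimal example under the same assumptions as above (scikit-learn, synthetic data); `max_depth` stands in for any complexity control such as a regularization strength.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Compare an unconstrained tree against a depth-limited one.
# Shallower trees trade a little bias for a large reduction in variance.
gaps = {}
for depth in (None, 3):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    # Cross-validation estimates generalization without touching the test set.
    cv_score = cross_val_score(tree, X_train, y_train, cv=5).mean()
    tree.fit(X_train, y_train)
    gaps[depth] = tree.score(X_train, y_train) - tree.score(X_test, y_test)
    print(f"max_depth={depth}: cv={cv_score:.2f}, gap={gaps[depth]:.2f}")
```

Typically the depth-limited tree shows a much smaller train–test gap, and its cross-validation score gives an honest estimate of test performance before the test set is ever used. The same pattern applies to the other hinted remedies (early stopping, ensembling, more data): each reduces variance at some cost in bias or effort.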