This question evaluates a candidate's understanding of regularization techniques in supervised learning. It covers the geometric intuition behind L1 versus L2 penalties, sparsity and the bias–variance trade-off, behavior under multicollinearity (including when to prefer the elastic net), differences between the algorithms used to solve each problem, and the penalized logistic regression objective and its tuning. It is commonly asked in the Machine Learning domain to assess both conceptual understanding and practical application of model regularization, numerical stability, and reproducible hyperparameter selection; the expected level spans conceptual reasoning through implementation-aware detail.
Context: You are comparing L2 (Ridge) and L1 (Lasso) regularization for linear and logistic regression. Assume the predictors are the columns of X, the response is y, and the intercept is not penalized unless stated otherwise. Objectives are written in penalized form (minimize loss + penalty).
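For concreteness, the two linear-regression objectives being compared can be written as below; this is a sketch that assumes squared-error loss and uses λ ≥ 0 for the regularization strength, a notation not fixed by the question itself.

```latex
\hat{\beta}_{\text{ridge}} = \arg\min_{\beta}\; \|y - X\beta\|_2^2 + \lambda\,\|\beta\|_2^2,
\qquad
\hat{\beta}_{\text{lasso}} = \arg\min_{\beta}\; \|y - X\beta\|_2^2 + \lambda\,\|\beta\|_1 .
```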
(a) Shrinkage geometry and sparsity
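As an empirical companion to the geometric argument (the corners of the L1 ball lie on the coordinate axes, so the constrained optimum often lands on an axis and zeroes coordinates exactly, while the smooth L2 ball only shrinks them), here is a minimal scikit-learn sketch; the synthetic data, standardization step, and alpha values are illustrative assumptions, not part of the question.

```python
# Sketch: with comparable regularization, L1 produces exact zeros
# (sparsity) while L2 only shrinks coefficients toward zero.
import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n, p = 200, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:5] = [3.0, -2.0, 1.5, 1.0, -0.5]   # only 5 informative features
y = X @ beta_true + rng.normal(scale=1.0, size=n)

X_std = StandardScaler().fit_transform(X)      # penalties assume comparable scales

lasso = Lasso(alpha=0.1).fit(X_std, y)         # L1 penalty
ridge = Ridge(alpha=10.0).fit(X_std, y)        # L2 penalty

print("lasso zeros:", np.sum(lasso.coef_ == 0))  # several exact zeros (sparse)
print("ridge zeros:", np.sum(ridge.coef_ == 0))  # typically none (dense)
```

Run as written, the Lasso fit typically reports many exactly-zero coefficients among the noise features, while the Ridge fit reports none.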
(b) Bias of penalized estimates
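One way to make the bias of penalized estimates concrete is a small Monte Carlo: averaging each estimator over many simulated datasets exposes its bias, and its spread across datasets exposes its variance. The sketch below is illustrative; the sample size, penalty strength, and noise level are arbitrary choices.

```python
# Sketch: ridge trades bias for variance relative to OLS.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(1)
n, p, reps = 50, 5, 2000
beta_true = np.array([2.0, -1.0, 0.5, 0.0, 1.5])

ols_est, ridge_est = [], []
for _ in range(reps):
    X = rng.normal(size=(n, p))
    y = X @ beta_true + rng.normal(scale=2.0, size=n)
    ols_est.append(LinearRegression().fit(X, y).coef_)
    ridge_est.append(Ridge(alpha=25.0).fit(X, y).coef_)

ols_est, ridge_est = np.array(ols_est), np.array(ridge_est)
print("OLS   bias:", np.round(ols_est.mean(0) - beta_true, 2))    # approx. 0
print("ridge bias:", np.round(ridge_est.mean(0) - beta_true, 2))  # shrunk toward 0
print("OLS   total var :", np.round(ols_est.var(0).sum(), 3))
print("ridge total var :", np.round(ridge_est.var(0).sum(), 3))   # smaller
```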
(c) Multicollinearity
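The behavior under multicollinearity can be sketched with two nearly identical predictors: ridge tends to split the weight between the correlated pair, lasso tends to keep one and zero the other (and which one it keeps can be unstable), and elastic net interpolates between the two behaviors. All data and penalty settings below are illustrative assumptions.

```python
# Sketch: correlated predictors under ridge, lasso, and elastic net.
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet

rng = np.random.default_rng(2)
n = 300
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)     # almost perfectly correlated with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = 2.0 * x1 + 2.0 * x2 + 1.0 * x3 + rng.normal(size=n)

for model in (Ridge(alpha=1.0),
              Lasso(alpha=0.1),
              ElasticNet(alpha=0.1, l1_ratio=0.5)):
    coef = model.fit(X, y).coef_
    print(type(model).__name__, np.round(coef, 2))
```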
(d) Algorithms and library differences
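On the algorithmic side, ridge has a closed-form solution because adding λI makes the normal-equations matrix invertible for any λ > 0, whereas the lasso's non-differentiable penalty rules out a closed form and calls for iterative methods such as coordinate descent (the solver scikit-learn's Lasso uses), whose inner update is the soft-thresholding operator. A minimal sketch under those assumptions:

```python
# Sketch: closed-form ridge vs. scikit-learn, plus lasso's key operator.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
n, p, lam = 100, 8, 5.0
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)

# Closed-form ridge: solve (X^T X + lam * I) beta = X^T y
beta_closed = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# scikit-learn's Ridge with matching conventions (no intercept, same lam)
beta_sklearn = Ridge(alpha=lam, fit_intercept=False).fit(X, y).coef_
print(np.allclose(beta_closed, beta_sklearn, atol=1e-6))  # True

def soft_threshold(z, t):
    """Proximal operator of t * |.|: shrinks z toward 0, clipping at 0."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Small entries are clipped to exactly zero; this is the coordinate-descent
# inner step that makes lasso solutions sparse.
print(soft_threshold(np.array([-2.0, -0.3, 0.5, 3.0]), 1.0))
```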
(e) Logistic regression with L2
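For the logistic case, scikit-learn parameterizes the L2-penalized objective through C, the inverse of the regularization strength (smaller C means a stronger penalty), and does not penalize the intercept. Below is a minimal sketch of a reproducible tuning setup; the dataset, CV grid, and seeds are illustrative assumptions.

```python
# Sketch: L2-penalized logistic regression with reproducible CV tuning.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=20, random_state=4)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=4)  # fixed folds
clf = make_pipeline(
    StandardScaler(),                      # penalties are scale-sensitive
    LogisticRegressionCV(
        Cs=np.logspace(-3, 3, 13),         # grid over inverse strength C = 1/lambda
        penalty="l2",
        solver="lbfgs",                    # smooth L2 objective suits quasi-Newton
        cv=cv,
        max_iter=5000,
    ),
)
clf.fit(X, y)
print("chosen C:", clf[-1].C_)             # best C per class (array)
```

Scaling inside the pipeline and fixing the CV folds with a seed are the two details that make the selected C both meaningful across features and reproducible across runs.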