This question evaluates competency in model evaluation metrics (ROC AUC and Average Precision), handling of class imbalance, choice of output activations and loss functions, robustness to outliers (MSE vs. MAE), ensemble methods, and overfitting diagnostics within the Machine Learning domain.
You are given model scores and binary labels for a small dataset and asked to compute ROC AUC manually, then answer modeling and evaluation questions.
Given:
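(The scores and labels themselves are not reproduced here; the sketch below uses hypothetical values purely to illustrate the manual procedure.) ROC AUC equals the Mann-Whitney statistic: the fraction of positive-negative pairs in which the positive example receives the higher score, counting ties as half. A minimal Python sketch, cross-checked against scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_auc_score  # used only as a cross-check

# Hypothetical stand-ins for the omitted dataset above.
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2])
labels = np.array([1,   1,   0,   1,   0,    0,   1,   0])

def manual_roc_auc(y_true, y_score):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive outranks a randomly chosen negative,
    with ties counted as half."""
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

print(manual_roc_auc(labels, scores))  # 0.75 for this hypothetical data
print(roc_auc_score(labels, scores))   # should agree with the manual value
```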
Answer all sub-questions precisely:
For each scenario, choose an output-layer activation and a loss function, and justify your choices (common pairings are sketched below).
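A minimal sketch of the canonical pairings, assuming the scenarios are binary classification, multiclass classification, and regression (the scenario list is not reproduced above); it uses PyTorch, and all tensors are illustrative placeholders:

```python
import torch
import torch.nn as nn

# Binary classification: linear output head; the sigmoid is folded into
# BCEWithLogitsLoss for numerical stability (no explicit activation).
logits = torch.randn(4, 1)
y_bin = torch.tensor([[1.0], [0.0], [1.0], [0.0]])
bce = nn.BCEWithLogitsLoss()(logits, y_bin)

# Multiclass classification: linear head with C outputs; CrossEntropyLoss
# applies log-softmax internally, so again no explicit activation.
multi_logits = torch.randn(4, 3)
y_cls = torch.tensor([0, 2, 1, 0])
ce = nn.CrossEntropyLoss()(multi_logits, y_cls)

# Regression: identity (linear) output with MSE, or MAE (L1 loss)
# when the targets contain outliers.
preds = torch.randn(4, 1)
y_reg = torch.randn(4, 1)
mse = nn.MSELoss()(preds, y_reg)
mae = nn.L1Loss()(preds, y_reg)

print(bce.item(), ce.item(), mse.item(), mae.item())
```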
Also discuss vanishing gradients for sigmoid/tanh and why leaky ReLU or GELU might help in hidden layers.
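A quick numeric illustration of the saturation argument: the derivatives of sigmoid and tanh shrink toward zero for large |x|, while a leaky ReLU's derivative is bounded below (the slope 0.01 is the common default, assumed here):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-8.0, -2.0, 0.0, 2.0, 8.0])

sig_grad = sigmoid(x) * (1.0 - sigmoid(x))  # capped at 0.25, ~3e-4 at |x| = 8
tanh_grad = 1.0 - np.tanh(x) ** 2           # capped at 1, ~0 when saturated
leaky_grad = np.where(x > 0, 1.0, 0.01)     # leaky ReLU: never below 0.01

print("sigmoid'", sig_grad.round(4))
print("tanh'   ", tanh_grad.round(4))
print("leaky'  ", leaky_grad)
```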
Explain the optimization and robustness differences between MSE and MAE: gradient behavior, influence of outliers, and mean vs. median optimality.
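A small sketch with a hypothetical outlier, showing that the MSE-optimal constant prediction is the mean (dragged toward the outlier because the gradient grows with the residual) while the MAE-optimal constant is the median:

```python
import numpy as np

y = np.array([1.0, 1.2, 0.9, 1.1, 10.0])  # last observation is an outlier
c = np.linspace(0.0, 10.0, 10001)         # candidate constant predictions

mse = ((y[None, :] - c[:, None]) ** 2).mean(axis=1)
mae = np.abs(y[None, :] - c[:, None]).mean(axis=1)

# MSE's gradient grows linearly with the residual, so the outlier drags the
# minimizer toward itself; MAE's gradient is bounded at +/-1 per point.
print("MSE minimizer:", c[mse.argmin()], "vs mean:  ", y.mean())      # ~2.84
print("MAE minimizer:", c[mae.argmin()], "vs median:", np.median(y))  # ~1.10
```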
Contrast bagging vs boosting in terms of bias/variance and when you’d choose each for noisy data.
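A hedged comparison sketch on synthetic noisy regression data (hyperparameters are illustrative, not tuned): bagging averages independently trained deep trees to reduce variance, while boosting fits residuals sequentially to reduce bias, which makes it quicker to chase label noise:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

# Deliberately noisy regression targets.
X, y = make_regression(n_samples=500, n_features=10, noise=25.0,
                       random_state=0)

bag = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100,
                       random_state=0)
boost = GradientBoostingRegressor(n_estimators=100, random_state=0)

for name, model in [("bagging", bag), ("boosting", boost)]:
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(name, round(score, 3))
```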
Name two concrete, testable diagnostics (with plots/metrics) and two mitigation tactics that won’t leak validation information.
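As one worked example of such a diagnostic, the sketch below computes a learning curve on cross-validation splits (model and data are hypothetical stand-ins); a persistently large train/validation gap flags overfitting, and any mitigation should be tuned on these splits only, never on the final test set:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

sizes, train_scores, val_scores = learning_curve(
    RandomForestClassifier(random_state=0), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5, scoring="accuracy")

# A gap that stays wide as training size grows indicates overfitting;
# a gap that shrinks suggests more data alone would help.
for n, tr, va in zip(sizes, train_scores.mean(axis=1),
                     val_scores.mean(axis=1)):
    print(f"n={n:4d}  train={tr:.3f}  val={va:.3f}  gap={tr - va:.3f}")
```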