ML Theory Check: PCA, Trees, RL, Regularization, Optimization, and Transformers
Context: Provide concise, technically correct explanations suitable for a machine-learning engineer take-home. Use formulas and brief examples where helpful.
Tasks
- PCA: Interpret the eigenvectors (and eigenvalues) of the covariance matrix.
- Decision trees: Define Gini impurity and explain how it is used to choose splits.
- Reinforcement learning: Write the Bellman equation(s) and explain their role.
- Neural networks: Explain dropout as a regularization technique.
- Training stability: Describe gradient clipping, the vanishing-gradient problem, and how ResNets help.
- Optimization landscape: Why are many deep-learning loss surfaces non-convex?
- Transformers: Explain attention mechanisms and what “scaling” means in this context (both the scaled dot product and computational scaling with sequence length).
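For the PCA task, one possible answer sketch in NumPy (the data matrix and mixing matrix below are made-up toy values): the eigenvectors of the covariance matrix are the orthogonal directions of maximal variance (principal axes), and each eigenvalue is the variance of the data projected onto the corresponding axis.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy correlated 2-D data (mixing matrix is illustrative, not from the task).
X = rng.normal(size=(500, 2)) @ np.array([[2.0, 0.0], [1.0, 0.5]])
Xc = X - X.mean(axis=0)

cov = np.cov(Xc, rowvar=False)            # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)    # ascending eigenvalues; columns are eigenvectors

# Sort descending: first column = direction of greatest variance.
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Projecting onto the eigenvectors decorrelates the data, and the variance
# along each principal axis equals the corresponding eigenvalue.
projected = Xc @ eigvecs
print(np.allclose(projected.var(axis=0, ddof=1), eigvals))  # True
```

Dropping the columns with the smallest eigenvalues gives the usual dimensionality reduction: it discards the axes that explain the least variance.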
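For the decision-tree task, a minimal sketch of Gini impurity and split scoring (function names and the example labels are illustrative): a node's impurity is Gini(t) = 1 - Σₖ pₖ², and a split is chosen to maximize the impurity decrease weighted by child sizes.

```python
import numpy as np

def gini(labels):
    # Gini impurity: 1 - sum_k p_k^2, the probability that two random
    # draws from the node have different labels.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def split_gain(parent, left, right):
    # Weighted impurity decrease used to score a candidate split;
    # the tree greedily picks the split with the largest gain.
    n, nl, nr = len(parent), len(left), len(right)
    return gini(parent) - (nl / n) * gini(left) - (nr / n) * gini(right)

y = np.array([0, 0, 0, 1, 1, 1])
print(gini(y))                       # 0.5 for a balanced binary node
print(split_gain(y, y[:3], y[3:]))   # 0.5: a perfect split removes all impurity
```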
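For the RL task, the Bellman optimality equation V*(s) = maxₐ [R(s,a) + γ Σ_s' P(s'|s,a) V*(s')] can be demonstrated with value iteration on a toy MDP (the transition and reward arrays below are invented for illustration): repeatedly applying the backup converges to a fixed point, which is exactly what the equation asserts.

```python
import numpy as np

# Toy MDP: 2 states, 2 actions. P[a, s, s'] = transition probability,
# R[a, s] = expected immediate reward. Values are illustrative only.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.0, 1.0]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9

# Value iteration: apply the Bellman optimality backup
#   V(s) <- max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]
# The backup is a gamma-contraction, so it converges to the unique V*.
V = np.zeros(2)
for _ in range(500):
    V = np.max(R + gamma * (P @ V), axis=0)

# At convergence, V is a fixed point of the backup operator.
backup = np.max(R + gamma * (P @ V), axis=0)
print(np.allclose(V, backup))  # True
```

The policy version replaces the max over actions with an expectation under π(a|s), giving V^π(s) = Σₐ π(a|s)[R(s,a) + γ Σ_s' P(s'|s,a) V^π(s')].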
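For the dropout task, a minimal sketch of inverted dropout (the function signature is illustrative): units are zeroed with probability p during training and the survivors are rescaled by 1/(1-p), so the expected activation matches test time and no rescaling is needed at inference.

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    # Inverted dropout: zero each unit with probability p during training,
    # rescale survivors by 1/(1-p) so E[output] == x. At test time, identity.
    if not training or p == 0.0:
        return x
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(0)
x = np.ones(100_000)
out = dropout(x, p=0.5, rng=rng)
print(out.mean())  # close to 1.0: the expectation is preserved
```

The regularization effect comes from forcing units not to co-adapt: each forward pass trains a random subnetwork, which acts like an ensemble at test time.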
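For the training-stability task, a sketch of gradient clipping by global norm (function name and the gradient values are illustrative): if the combined L2 norm of all gradients exceeds a threshold, everything is rescaled by the same factor, bounding the update size while preserving its direction. This guards against exploding gradients; vanishing gradients are instead addressed architecturally, e.g. by ResNet skip connections, whose identity path lets gradients flow through x + F(x) without being repeatedly multiplied by small Jacobians.

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    # Compute the global L2 norm across all parameter gradients; if it
    # exceeds max_norm, scale every gradient by the same factor so the
    # direction of the update is preserved.
    total = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = min(1.0, max_norm / (total + 1e-12))
    return [g * scale for g in grads], total

grads = [np.array([3.0, 4.0]), np.array([0.0, 12.0])]  # global norm = 13
clipped, norm = clip_by_global_norm(grads, max_norm=1.0)
print(norm)                                              # 13.0
print(np.sqrt(sum(np.sum(g ** 2) for g in clipped)))     # ~1.0 after clipping
```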
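For the non-convexity task, one concrete demonstration (the tiny network and parameter values are invented for illustration): permutation symmetry means swapping two hidden units leaves the loss unchanged, so there are multiple global minima; but the midpoint between them is not a minimum, which a convex function cannot do.

```python
import numpy as np

# Two-hidden-unit net f(x) = v1*tanh(w1*x) + v2*tanh(w2*x).
def f(theta, x):
    w1, w2, v1, v2 = theta
    return v1 * np.tanh(w1 * x) + v2 * np.tanh(w2 * x)

x = np.linspace(-2, 2, 50)
theta = np.array([1.0, 2.0, 1.0, -1.0])
target = f(theta, x)                 # loss is exactly 0 at theta by construction

def loss(theta):
    return np.mean((f(theta, x) - target) ** 2)

perm = theta[[1, 0, 3, 2]]           # swap the two hidden units: same function
mid = (theta + perm) / 2             # midpoint of two global minima

print(loss(theta), loss(perm))       # 0.0 0.0 — permutation symmetry
print(loss(mid) > 0)                 # True: the midpoint has higher loss,
                                     # so the surface is non-convex
```

The same symmetry argument scales up: any hidden-layer permutation (and, with some activations, sign flips) yields combinatorially many equivalent minima, and composing nonlinear layers adds saddle points on top.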
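For the transformer task, a minimal single-head sketch of scaled dot-product attention (the function name and toy shapes are illustrative): logits QKᵀ are divided by √d_k so their variance stays near 1 and the softmax does not saturate for large head dimensions. Note the n×n score matrix: this is the computational sense of "scaling", since time and memory grow as O(n²·d) in sequence length n.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.
    # Dividing by sqrt(d_k) keeps the logits' variance ~1, avoiding a
    # saturated softmax (and tiny gradients) when d_k is large.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # (n, n): O(n^2 * d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over keys
    return weights @ V, weights

rng = np.random.default_rng(0)
n, d = 4, 8                         # toy sequence length and head dimension
Q, K, V = rng.normal(size=(3, n, d))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)                    # (4, 8): one mixed value vector per query
print(np.allclose(w.sum(axis=-1), 1.0))  # True: rows are distributions over keys
```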