Explain core ML and DL fundamentals
Company: DRW
Role: Machine Learning Engineer
Category: Machine Learning
Difficulty: medium
Interview Round: Take-home Project
Answer the following ML/DL concept questions:
- PCA: What do the eigenvectors of the covariance matrix represent, and how do they relate to principal components and explained variance?
- Decision trees: Define Gini impurity, show how to compute it for a node, and explain how it is used to choose splits.
- Reinforcement learning: Write the Bellman optimality equation (for V* or Q*) and explain its role in policy evaluation and improvement.
- Regularization: What is dropout, how does it behave at training vs. inference time, and why does it act as a regularizer?
- Training stability: What is gradient clipping, when is it useful, and how do residual connections in ResNets help mitigate vanishing gradients?
- Optimization landscape: Why are deep learning objectives typically non-convex, and what are the practical implications for optimization (e.g., local minima vs. saddle points)?
- Transformers: Describe scaled dot-product attention and explain why the dot products are scaled by 1/sqrt(d_k).
Quick Answer: This question evaluates core ML and DL fundamentals: dimensionality reduction (PCA), decision-tree impurity measures, the Bellman optimality equation in reinforcement learning, regularization via dropout, training-stability techniques (gradient clipping, residual connections), the non-convex optimization landscape, and transformer attention. Commonly asked in the Machine Learning domain, it measures both theoretical knowledge of algorithms and training dynamics and the ability to reason about model behavior, trade-offs, and practical implications.
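For the PCA question, the eigenvectors of the covariance matrix are the principal components (orthogonal directions of maximal variance), the eigenvalues are the variances along those directions, and each eigenvalue divided by the eigenvalue sum is that component's explained-variance ratio. A minimal NumPy sketch on synthetic data (illustrative values, not from the prompt):

```python
import numpy as np

# Synthetic data: three features with very different scales (3, 1, 0.1).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) * np.array([3.0, 1.0, 0.1])

Xc = X - X.mean(axis=0)                 # center the data first
cov = np.cov(Xc, rowvar=False)          # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: for symmetric matrices
order = np.argsort(eigvals)[::-1]       # sort eigenpairs by descending variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained_variance_ratio = eigvals / eigvals.sum()
scores = Xc @ eigvecs                   # project data onto the principal components
```

The variance of each projected column equals the corresponding eigenvalue, which is why the eigenvalues directly quantify explained variance.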
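For the decision-tree question, Gini impurity of a node is 1 - sum_k p_k^2 over the class proportions p_k, and a split is chosen to minimize the size-weighted impurity of the children (equivalently, to maximize the impurity decrease from the parent). A small sketch with hypothetical labels:

```python
import numpy as np

def gini(labels):
    """Gini impurity of a node: 1 - sum_k p_k^2 over class proportions p_k."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def split_impurity(left, right):
    """Size-weighted average impurity of the two child nodes; split
    selection minimizes this over candidate splits."""
    n = len(left) + len(right)
    return len(left) / n * gini(left) + len(right) / n * gini(right)
```

For a parent node `[0, 0, 1, 1]`, Gini is 0.5; a perfect split into `[0, 0]` and `[1, 1]` has weighted child impurity 0, so its Gini gain is 0.5.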
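For the reinforcement-learning question, the Bellman optimality equation is V*(s) = max_a [ r(s, a) + gamma * sum_{s'} P(s' | s, a) V*(s') ]; value iteration applies it as a fixed-point update, and the greedy policy with respect to the resulting Q-values is optimal. A sketch on a hypothetical deterministic 2-state, 2-action MDP (all numbers invented for illustration):

```python
import numpy as np

gamma = 0.9
# Hypothetical MDP: next_state[s, a] and reward[s, a].
# Action 0 stays in place; action 1 moves to the other state.
next_state = np.array([[0, 1], [1, 0]])
reward = np.array([[0.0, 1.0],   # state 0: staying pays 0, moving pays 1
                   [2.0, 0.0]])  # state 1: staying pays 2, moving pays 0

V = np.zeros(2)
for _ in range(1000):
    Q = reward + gamma * V[next_state]  # Bellman backup: Q(s,a) = r + gamma * V(s')
    V_new = Q.max(axis=1)               # V*(s) = max_a Q*(s,a)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break                           # contraction => converges to the fixed point
    V = V_new

policy = Q.argmax(axis=1)               # greedy policy w.r.t. Q is optimal
```

Here the optimal policy is to move to state 1 and stay there, giving V* = [19, 20] (since V*(1) = 2 + 0.9 * V*(1) = 20).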
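For the dropout question, a sketch of "inverted" dropout (the variant most frameworks use): at training time each unit is zeroed with probability p and survivors are scaled by 1/(1-p), so the expected activation is unchanged and inference is a plain identity pass with no rescaling:

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout: zero units with probability p at train time and
    scale survivors by 1/(1-p); at inference, return x unchanged."""
    if not training or p == 0.0:
        return x                         # inference: identity, no rescaling needed
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p      # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)          # rescale so E[output] == input

x = np.ones(10000)
y = dropout(x, p=0.3, rng=np.random.default_rng(0))
```

Randomly thinning units prevents co-adaptation and is roughly equivalent to averaging an ensemble of subnetworks, which is why it regularizes.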
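For the training-stability question, a sketch of global-norm gradient clipping (similar in spirit to `torch.nn.utils.clip_grad_norm_`): when the joint L2 norm of all gradients exceeds a threshold, all gradients are scaled down by a common factor, which bounds update size without changing the update direction. Residual connections address the complementary problem: with y = x + F(x), the Jacobian is I + dF/dx, so the identity term lets gradients flow through unattenuated even when dF/dx is small.

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    """If the joint L2 norm of all gradients exceeds max_norm, scale them
    all by a common factor so the joint norm equals max_norm."""
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = min(1.0, max_norm / (total_norm + 1e-12))  # eps avoids divide-by-zero
    return [g * scale for g in grads], total_norm

# Example: two gradients with joint norm 5, clipped to norm 1.
grads = [np.array([3.0]), np.array([4.0])]
clipped, total_norm = clip_by_global_norm(grads, max_norm=1.0)
```

Clipping is most useful when loss surfaces have occasional steep cliffs (RNNs, early transformer training), where a single exploding gradient would otherwise destroy the parameters.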
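For the optimization-landscape question, non-convexity can be demonstrated directly from the permutation symmetry of hidden units: swapping two hidden units gives a different parameter vector with identical loss, and for a convex loss Jensen's inequality would force the midpoint of those two points to do at least as well, which it does not. A toy demonstration with a hypothetical two-hidden-unit tanh network:

```python
import numpy as np

x = np.linspace(-2, 2, 50)

def net(theta, x):
    """Tiny network: y = v1*tanh(w1*x) + v2*tanh(w2*x)."""
    w1, w2, v1, v2 = theta
    return v1 * np.tanh(w1 * x) + v2 * np.tanh(w2 * x)

theta_a = np.array([1.0, 2.0, 1.0, -1.0])
theta_b = np.array([2.0, 1.0, -1.0, 1.0])  # hidden units swapped: same function
target = net(theta_a, x)                   # so both parameter vectors fit exactly

def loss(theta):
    return np.mean((net(theta, x) - target) ** 2)

mid = (theta_a + theta_b) / 2  # = (1.5, 1.5, 0, 0): the constant-zero network
```

`loss(theta_a)` and `loss(theta_b)` are both zero, yet `loss(mid)` is strictly positive, violating convexity. In practice, high-dimensional deep-learning landscapes are dominated by saddle points rather than bad local minima, which is one reason noisy first-order methods like SGD work as well as they do.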
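For the transformer question, a single-head sketch of scaled dot-product attention: Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V. If query and key components are roughly i.i.d. with unit variance, the raw dot product has variance d_k; dividing by sqrt(d_k) keeps logit variance near 1, which stops the softmax from saturating and its gradients from vanishing as d_k grows.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)  # scaling keeps logit variance ~1
    weights = softmax(scores, axis=-1)              # each row is a distribution
    return weights @ V, weights

# Illustrative shapes: 4 queries attending over 5 key/value pairs, d_k = 8.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(5, 8))
V = rng.normal(size=(5, 8))
out, weights = scaled_dot_product_attention(Q, K, V)
```

Each output row is a convex combination of value rows, with weights given by softmaxed query-key similarity.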