Scenario
The product team wants to understand the pros and cons of Random Forests versus Boosted Decision Trees, and whether any feature scaling is required for these tree-based algorithms.

Question
Compare Random Forests and Gradient-Boosted Decision Trees (e.g., XGBoost) in terms of bias/variance, interpretability, training speed, and robustness to overfitting. Is feature standardization or normalization necessary for tree-based models? Explain why or why not.

Hints
Focus on ensemble construction, sequential vs. parallel learning, split criteria, and how trees handle monotonic transformations; the scaling-invariance point is illustrated in the sketch below.
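
To make the monotonic-transformation hint concrete, here is a minimal sketch (assuming scikit-learn is available; the synthetic dataset and hyperparameters are purely illustrative). Because tree splits compare a feature against a threshold, any strictly monotonic per-feature transform such as standardization only rescales the thresholds without changing the induced partitions, so predictions should be identical with and without scaling.

```python
# Minimal sketch: tree ensembles are invariant to monotonic feature
# transformations such as standardization. Dataset and hyperparameters
# are illustrative assumptions, not from the original question.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit one forest on raw features and an identically seeded forest on
# standardized features.
rf_raw = RandomForestClassifier(n_estimators=100, random_state=0)
rf_raw.fit(X_train, y_train)

scaler = StandardScaler().fit(X_train)
rf_scaled = RandomForestClassifier(n_estimators=100, random_state=0)
rf_scaled.fit(scaler.transform(X_train), y_train)

# The predictions agree: standardization changed the split threshold
# values, not the partitions the trees induce.
same = np.array_equal(rf_raw.predict(X_test),
                      rf_scaled.predict(scaler.transform(X_test)))
print(f"Identical predictions with and without scaling: {same}")
```

The same argument applies to gradient-boosted trees, since their base learners also split on feature thresholds; scaling matters only for models whose loss or distance computations depend on feature magnitudes (e.g., linear models with regularization, k-NN, SVMs).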