This question evaluates understanding of ensemble machine learning methods—specifically differences between Random Forests and gradient-boosted decision trees—and the impact of feature preprocessing on tree-based models.

A product/data science team is deciding between Random Forests and Gradient-Boosted Decision Trees (e.g., XGBoost) for a new predictive task. They also want to know whether they must standardize or normalize features for these tree-based models.
Compare Random Forests and Gradient-Boosted Decision Trees in terms of how each ensemble is trained (independent trees on bootstrap samples vs. trees fitted sequentially to correct prior errors), how predictions are combined, and practical trade-offs such as parallelism and tuning sensitivity.
Then answer: Is feature standardization or normalization necessary for tree-based models? Explain why or why not.
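A candidate could support the answer empirically. The sketch below (using scikit-learn on synthetic data; the dataset and hyperparameters are illustrative, not part of the original question) fits a Random Forest and a gradient-boosted model on raw and on standardized features. Because tree splits depend only on the ordering of values within each feature, and standardization is a monotonic per-feature transform, the predictions should be identical:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier

# Illustrative synthetic data (not from the original question).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# Standardize to zero mean, unit variance -- a monotonic per-feature transform.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

for Model in (RandomForestClassifier, GradientBoostingClassifier):
    raw = Model(n_estimators=50, random_state=0).fit(X, y).predict(X)
    std = Model(n_estimators=50, random_state=0).fit(X_std, y).predict(X_std)
    # Split points shift with the scale, but the resulting partitions -- and
    # therefore the predictions -- are unchanged.
    assert np.array_equal(raw, std)
print("identical predictions with and without standardization")
```

This is why standardization/normalization is generally unnecessary for tree-based models, in contrast to scale-sensitive methods such as k-nearest neighbors, SVMs, or gradient-descent-trained linear models.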