PracHub

Explain XGBoost depth, regularization, and dropout

Last updated: Mar 29, 2026

Quick Overview

This question tests understanding of model complexity and regularization across gradient-boosted decision trees and neural networks. It covers how tree depth affects the bias/variance trade-off, overfitting risk, and computational cost; how L1/L2 penalties and weight decay modify the objective, the gradients, and the learned parameters; how dropout should behave at inference; and how training differs from inference. Interviewers use it to probe reasoning about overfitting, generalization, and deployment trade-offs. It is primarily conceptual, with some practical-application considerations.

  • medium
  • Uber
  • Machine Learning
  • Machine Learning Engineer

Company: Uber

Role: Machine Learning Engineer

Category: Machine Learning

Difficulty: medium

Interview Round: Onsite

Date: Sep 6, 2025

ML Conceptual Questions (Onsite)

Answer the following:

(a) Gradient-boosted decision trees: How does maximum tree depth affect bias/variance, overfitting risk, and training/inference cost? How would you choose it in practice?
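A back-of-envelope sketch (not part of the original question) of the capacity/cost asymmetry behind part (a): leaf count, and with it variance and overfitting risk, grows exponentially with depth, while per-example inference cost grows only linearly. In practice, depth is usually chosen by cross-validation over a small grid (XGBoost's `max_depth` default is 6, and values in roughly 3-10 are common).

```python
# Back-of-envelope complexity of a single binary decision tree.
# A tree of depth d has at most 2**d leaves (capacity grows
# exponentially), while scoring one example costs at most d
# comparisons per tree (inference grows only linearly).

def max_leaves(depth: int) -> int:
    return 2 ** depth

def inference_comparisons(depth: int, n_trees: int) -> int:
    # Worst-case comparisons to score one example with the whole ensemble.
    return depth * n_trees

for d in (3, 6, 12):
    print(d, max_leaves(d), inference_comparisons(d, n_trees=500))
```

Doubling depth from 6 to 12 multiplies the maximum leaf count by 64 but only doubles per-example scoring cost, which is why deep trees overfit long before they become slow to serve.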

(b) Neural networks: Compare L1 vs L2 regularization and weight decay — how do they modify the objective, gradients, and learned parameters?
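A toy sketch of part (b) on a single scalar weight (function names are illustrative, not from any library): L2 adds a pull proportional to the weight, L1 adds a constant pull toward zero (which drives small weights exactly to zero, hence sparsity), and decoupled weight decay shrinks the weight directly, outside the gradient.

```python
def l2_step(w, grad, lr, lam):
    # L2 penalty adds lam * w to the gradient: shrink proportional to |w|.
    return w - lr * (grad + lam * w)

def l1_step(w, grad, lr, lam):
    # L1 penalty adds lam * sign(w): a constant pull toward zero,
    # which zeroes out small weights -> sparse solutions.
    sign = (w > 0) - (w < 0)
    return w - lr * (grad + lam * sign)

def decoupled_wd_step(w, grad, lr, lam):
    # AdamW-style decoupled decay: shrink w directly, separate from
    # the loss gradient (equivalent to L2 only for plain SGD).
    return w * (1 - lr * lam) - lr * grad
```

With zero loss gradient and a small weight w = 0.01 (lr = 0.1, lam = 0.5), L2 barely moves it (to 0.0095) while L1 overshoots straight past zero (to -0.04), illustrating why L1 produces exact zeros once a shrinkage/clipping rule is applied.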

(c) Dropout: After applying dropout during training, what should happen at inference time, and why?
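A minimal sketch of the standard answer to part (c), using the common "inverted dropout" convention: kept activations are scaled by 1/(1-p) during training so that the expected activation already matches the no-dropout case, and inference becomes a plain identity pass with no masking or rescaling.

```python
import random

def inverted_dropout(x, p, train, rng=random):
    """Inverted dropout. At train time, drop each unit with prob p and
    scale survivors by 1/(1-p) so E[output] == input. At inference,
    do nothing: the network is deterministic and uses full capacity."""
    if not train:
        return list(x)  # inference: identity, no randomness
    keep = 1.0 - p
    return [xi / keep if rng.random() < keep else 0.0 for xi in x]
```

With the older (non-inverted) formulation, the scaling instead happens at inference, where weights are multiplied by (1-p); either way, the point is that train-time and test-time activations must agree in expectation.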

(d) Training vs inference: Define and contrast these phases for ML models, including data flows, randomness, and performance considerations.
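A tiny sketch of the contrast in part (d), using a hypothetical one-weight model: training consumes labeled pairs (x, y) and mutates parameters via gradient steps, while inference consumes only x, reads the frozen parameters, and is forward-only and deterministic.

```python
class ScalarModel:
    """Toy one-weight linear model contrasting the two phases."""

    def __init__(self, w=0.0):
        self.w = w

    def train_step(self, x, y, lr=0.1):
        # Squared-error loss L = (w*x - y)**2, so dL/dw = 2*(w*x - y)*x.
        grad = 2.0 * (self.w * x - y) * x
        self.w -= lr * grad          # training: parameters change

    def predict(self, x):
        return self.w * x            # inference: forward pass only

model = ScalarModel()
data = [(1.0, 2.0), (2.0, 4.0)]      # true relation: y = 2x
for _ in range(50):                  # training loop over labeled data
    for x, y in data:
        model.train_step(x, y)
```

After training, `model.predict` approximates y = 2x; at serving time one would also care about latency, batching, and removing train-only randomness (dropout, shuffling), none of which appear in the inference path here.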
