PracHub

Evaluate Models for Credit-Risk Scoring at Capital One

Last updated: Mar 29, 2026

Quick Overview

This question evaluates a data scientist's competency in credit-risk model selection, performance evaluation, class-imbalance handling, and interpretability within a regulated financial context.



Company: Capital One

Role: Data Scientist

Category: Machine Learning

Difficulty: medium

Interview Round: Onsite

##### Scenario

Technical deep dive: building a production model for credit-risk scoring at Capital One.

##### Question

Compare logistic regression, random forest, and gradient boosting for credit-risk modeling; discuss pros and cons. Explain how you would evaluate model performance, handle class imbalance, and ensure model interpretability.

##### Hints

ROC-AUC, KS, SMOTE/weighting, SHAP, compliance requirements.


Posted: Aug 4, 2025, 10:55 AM

Scenario

You are building a production-grade credit-risk scoring model (predicting probability of default within a fixed horizon) for Capital One. The model will be used for underwriting decisions and must meet performance, compliance, and interpretability requirements.
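Of the three candidate model families, logistic regression is the interpretability baseline: each coefficient is a direct, signed contribution to the log-odds of default, which is a large part of why it persists in regulated underwriting. A minimal, stdlib-only sketch on synthetic data (the data, the single feature, and all function names are illustrative assumptions, not Capital One's actual pipeline):

```python
import math
import random

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Fit a plain logistic regression by batch gradient descent.
    Returns (weights, bias); the coefficients stay directly inspectable,
    which is the main appeal of this model class in credit scoring."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi          # gradient of the log-loss w.r.t. the logit
            for j in range(d):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

# Toy data: one feature (think credit utilization); higher values
# make default more likely, so the fitted weight should be positive.
random.seed(0)
X = [[random.random()] for _ in range(200)]
y = [1 if x[0] + random.gauss(0, 0.2) > 0.7 else 0 for x in X]
w, b = fit_logistic(X, y)
```

Because the model is additive in the logit, per-applicant reason codes fall out for free: the j-th feature's contribution to the score is just `w[j] * x[j]`, which supports the adverse-action explanations lenders must provide.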

Task

Compare logistic regression, random forest, and gradient boosting for credit-risk modeling. For each, discuss pros and cons in this context. Then describe how you would:

  1. Evaluate model performance (both discrimination and calibration), including appropriate train/validation splits.
  2. Handle class imbalance in defaults.
  3. Ensure model interpretability and compliance-readiness.
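For item 2, the simplest alternative to resampling is class weighting: reweight the training loss instead of duplicating or synthesizing minority rows (unlike SMOTE, it adds no synthetic records that model validators would have to account for). A small sketch of inverse-frequency weights (the helper name and the ~3% default rate are illustrative assumptions):

```python
from collections import Counter

def balanced_class_weights(labels):
    """Inverse-frequency weights, w_c = n_samples / (n_classes * n_c).
    Rare defaults get proportionally larger weight in the training
    loss, so the fit is not dominated by the non-default majority."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * cnt) for cls, cnt in counts.items()}

# Illustrative portfolio with a ~3% default rate.
labels = [1] * 30 + [0] * 970
weights = balanced_class_weights(labels)
# weights[1] ≈ 16.67 (defaults), weights[0] ≈ 0.52 (non-defaults)
```

Note that any reweighting or resampling distorts the model's raw output probabilities, so predicted PDs must be recalibrated before they are used for pricing or loss estimation.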

Include specific metrics (e.g., ROC-AUC, KS), imbalance techniques (e.g., class weighting, SMOTE), and explainability approaches (e.g., SHAP) and how they fit into a regulated credit environment.
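The discrimination and calibration metrics named above can be computed directly from labelled scores; a stdlib-only sketch (function names and the toy arrays are illustrative, and a real validation would score an out-of-time holdout):

```python
def roc_auc(y_true, scores):
    """Mann-Whitney form of AUC: the probability that a randomly
    chosen defaulter is scored above a randomly chosen non-defaulter."""
    pos = [s for s, t in zip(scores, y_true) if t == 1]
    neg = [s for s, t in zip(scores, y_true) if t == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def ks_statistic(y_true, scores):
    """KS: maximum gap between the cumulative score distributions of
    defaulters and non-defaulters; a long-standing credit-risk metric."""
    pos = [s for s, t in zip(scores, y_true) if t == 1]
    neg = [s for s, t in zip(scores, y_true) if t == 0]
    return max(
        abs(sum(s <= thr for s in pos) / len(pos)
            - sum(s <= thr for s in neg) / len(neg))
        for thr in set(scores)
    )

def brier_score(y_true, probs):
    """Calibration check: mean squared gap between predicted PD and outcome."""
    return sum((p - t) ** 2 for p, t in zip(probs, y_true)) / len(y_true)

y = [0, 0, 0, 1, 0, 1, 1, 0, 1, 0]
s = [0.10, 0.20, 0.30, 0.35, 0.40, 0.60, 0.70, 0.45, 0.90, 0.05]
# roc_auc(y, s) ≈ 0.917; ks_statistic(y, s) == 0.75
```

AUC summarizes ranking power across all cutoffs, KS captures the single best separation point, and the Brier score checks that predicted PDs behave as probabilities, which underwriting cutoffs and expected-loss estimates both depend on.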


© 2026 PracHub. All rights reserved.