PracHub

Implement Linear Regression Training

Last updated: Apr 6, 2026

Quick Overview

This question evaluates understanding of linear regression, mean squared error loss, gradient computation, parameter updates, and proficiency with vectorized numerical operations used in batch gradient descent.
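The overview names the mean squared error loss and its gradients; for reference, with an `n x d` feature matrix `X`, weight vector `w`, and bias `b`, these are (standard results, not taken from this page):

```latex
L(w, b) = \frac{1}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i)^2,
\qquad \hat{y}_i = x_i^\top w + b
```

```latex
\frac{\partial L}{\partial w} = \frac{2}{n} X^\top (\hat{y} - y),
\qquad
\frac{\partial L}{\partial b} = \frac{2}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i)
```

Each epoch of batch gradient descent then updates `w ← w - lr * ∂L/∂w` and `b ← b - lr * ∂L/∂b`.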


Implement Linear Regression Training

Company: N/A

Role: Machine Learning Engineer

Category: Coding & Algorithms

Interview Round: Onsite



Related Interview Questions

  • Compare Strings With Deletions - N/A (medium)

Implement a function `train_linear_regression(X, y, lr, epochs)` that trains a linear regression model from scratch using batch gradient descent.

Requirements:

  • `X` is an `n x d` matrix of real-valued features.
  • `y` is a length-`n` vector of real-valued targets.
  • Initialize all weights and the bias to `0`.
  • On each epoch, compute predictions `y_hat = Xw + b`.
  • Use mean squared error as the loss.
  • Compute the gradients for `w` and `b`.
  • Update parameters with the given learning rate.
  • Return the final weight vector and bias.

Do not call a library routine that directly fits a regression model. Basic matrix or array operations are allowed.
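The requirements above can be sketched as follows. This is one possible solution, not an official answer from this page; it uses NumPy for the allowed array operations, and the helper names (`err`, `grad_w`, `grad_b`) are my own:

```python
import numpy as np

def train_linear_regression(X, y, lr, epochs):
    """Train linear regression via batch gradient descent on MSE loss."""
    X = np.asarray(X, dtype=float)   # n x d feature matrix
    y = np.asarray(y, dtype=float)   # length-n target vector
    n, d = X.shape
    w = np.zeros(d)                  # weights initialized to 0
    b = 0.0                          # bias initialized to 0
    for _ in range(epochs):
        y_hat = X @ w + b                   # predictions for all n rows
        err = y_hat - y                     # residuals
        grad_w = (2.0 / n) * (X.T @ err)    # gradient of MSE w.r.t. w
        grad_b = (2.0 / n) * err.sum()      # gradient of MSE w.r.t. b
        w -= lr * grad_w                    # parameter updates
        b -= lr * grad_b
    return w, b
```

For example, fitting the perfectly linear data `y = 2x + 1` with a small learning rate and enough epochs should recover weights near `2` and a bias near `1`. Note that the factor of `2` from differentiating the squared error is sometimes folded into the learning rate; either convention is acceptable in an interview as long as it is stated.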
