PracHub

Write the logistic regression loss function

Last updated: Mar 29, 2026

Quick Overview

This question tests whether you can write the binary logistic regression (cross-entropy) loss: the probabilistic model, the per-example loss, the aggregated training objective, and, optionally, an L2-regularized variant.

  • Easy
  • Apple
  • Statistics & Math
  • Data Scientist

Write the logistic regression loss function

Company: Apple

Role: Data Scientist

Category: Statistics & Math

Difficulty: Easy

Interview Round: Technical Screen

## Logistic Regression Loss

Consider binary logistic regression.

  • Dataset: \(\{(\mathbf{x}_i, y_i)\}_{i=1}^n\)
  • Labels: \(y_i \in \{0,1\}\)
  • Model: \(p_i = P(y_i=1\mid \mathbf{x}_i) = \sigma(\mathbf{w}^\top \mathbf{x}_i + b)\), where \(\sigma(z)=\frac{1}{1+e^{-z}}\).

### Question

  1. Write the per-example loss.
  2. Write the total loss over \(n\) examples (average or sum).
  3. (Optional) Write the L2-regularized objective.
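For reference, a standard formulation of the three requested quantities, using the definitions above (conventions vary: some texts use a \(\frac{\lambda}{2}\) factor, and the bias \(b\) is conventionally not penalized):

\[
\ell_i = -\bigl[\,y_i \log p_i + (1-y_i)\log(1-p_i)\,\bigr]
\]

\[
\mathcal{L}(\mathbf{w}, b) = \frac{1}{n}\sum_{i=1}^{n} \ell_i
\]

\[
\mathcal{L}_{\text{reg}}(\mathbf{w}, b) = \frac{1}{n}\sum_{i=1}^{n} \ell_i + \lambda \|\mathbf{w}\|_2^2, \qquad \lambda \ge 0
\]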

Related Interview Questions

  • Critique a Linear Regression Workflow - Apple (easy)
  • How would you critique this regression? - Apple (easy)
  • Compare Normal vs Poisson; test dispersion and approximate tails - Apple (Medium)
  • Differentiate P-value and Confidence Interval in Statistics - Apple (medium)
  • Compare Normal and Poisson Distributions in Statistics - Apple (medium)
Posted: Jul 15, 2025


Solution

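The page's interactive solution is not reproduced here. As a sanity check on the formulas, here is a minimal Python sketch of the averaged loss with an optional L2 term (function and variable names are illustrative, not from the site):

```python
import math

def sigmoid(z):
    """Logistic function sigma(z) = 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + math.exp(-z))

def logistic_loss(w, b, X, y, lam=0.0):
    """Average binary cross-entropy over (X, y), plus lam * ||w||_2^2.

    The bias b is not penalized, matching the common convention.
    """
    n = len(y)
    total = 0.0
    for xi, yi in zip(X, y):
        z = sum(wj * xj for wj, xj in zip(w, xi)) + b
        p = sigmoid(z)
        total += -(yi * math.log(p) + (1 - yi) * math.log(1 - p))
    return total / n + lam * sum(wj * wj for wj in w)

# Tiny check: with w = 0 and b = 0 every p_i = 0.5, so the
# average loss is ln 2 regardless of the labels.
X = [[1.0, 2.0], [0.5, -1.0]]
y = [1, 0]
print(abs(logistic_loss([0.0, 0.0], 0.0, X, y) - math.log(2)) < 1e-12)  # True
```

In practice the loss is computed from the logit \(z\) directly (log-sum-exp form) to avoid overflow in `math.exp` and `log(0)` at saturated predictions; the naive form above is fine for illustrating the formula.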
