
Explain precision/recall and compute NN output

Last updated: Mar 29, 2026

Quick Overview

This question evaluates understanding of classification evaluation metrics (precision, recall, F1), ensemble-learning principles (bagging vs. boosting, error correlation, and variance reduction), and numerical fluency in neural-network forward propagation.

Company: Coinbase

Role: Machine Learning Engineer

Category: Machine Learning

Difficulty: hard

Interview Round: Take-home Project

You are given a short ML fundamentals assessment with three parts.

Part A — Precision/Recall/F1

A binary classifier on a dataset produced the following confusion-matrix counts:

  • True Positives (TP) = 40
  • False Positives (FP) = 10
  • False Negatives (FN) = 20
  • True Negatives (TN) = 130
  1. Compute precision, recall, and F1 (a quick verification sketch follows this list).
  2. If you raise the decision threshold, what typically happens to precision and recall (and why)?
  3. In an imbalanced dataset where positives are rare, when would you optimize for precision vs. recall?
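
For a numeric sanity check, here is a minimal Python sketch (plain arithmetic, no libraries) that computes the three metrics directly from the counts above:

```python
# Confusion-matrix counts from Part A
TP, FP, FN, TN = 40, 10, 20, 130

precision = TP / (TP + FP)                                 # 40 / 50 = 0.8
recall    = TP / (TP + FN)                                 # 40 / 60 ≈ 0.6667
f1        = 2 * precision * recall / (precision + recall)  # harmonic mean ≈ 0.7273

print(f"precision={precision:.4f}  recall={recall:.4f}  f1={f1:.4f}")
```

The same formulas make question 2 concrete: raising the threshold shrinks the set of predicted positives, which typically increases precision, while recall can never increase as the threshold rises (true positives can only be reclassified as false negatives).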

Part B — Ensemble learning (select all correct)

For each statement, mark whether it is generally True or False, and briefly justify.

  1. Bagging primarily reduces variance.
  2. Bagging uses bootstrap sampling (sampling with replacement) to train each base model.
  3. Boosting trains base learners independently and can be fully parallelized without changing the algorithm.
  4. Ensembles can improve generalization because averaging/voting can cancel out uncorrelated errors (see the sketch after this list).
  5. If the base learners are perfectly correlated (always make the same predictions), bagging will provide large gains.
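
To make statements 4 and 5 concrete, here is a small numpy sketch using synthetic, uncorrelated Gaussian errors (an illustrative assumption, not part of the question):

```python
import numpy as np

rng = np.random.default_rng(0)

# 10 hypothetical base learners whose errors are uncorrelated,
# each with unit variance
n_learners, n_samples = 10, 100_000
errors = rng.normal(0.0, 1.0, size=(n_learners, n_samples))

var_single   = errors[0].var()            # one learner:     ~1.0
var_ensemble = errors.mean(axis=0).var()  # averaged errors: ~1.0 / 10

print(f"single-model variance:     {var_single:.3f}")
print(f"10-model average variance: {var_ensemble:.3f}")
```

If the learners' errors were perfectly correlated instead (statement 5), every row would be identical, errors.mean(axis=0) would equal errors[0], and the variance would not drop at all.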

Part C — Forward pass of a small neural network

Compute the network output for the following fully connected network. Use the sigmoid activation \(\sigma(t)=\frac{1}{1+e^{-t}}\) at both layers.

  • Input \(x = [1.0,\ 2.0]^T\)
  • Hidden layer (2 units): \(h = \sigma(W_1 x + b_1)\)
    • \(W_1 = \begin{bmatrix}0.5 & -1.0\\ 1.0 & 0.5\end{bmatrix}\), \(b_1 = \begin{bmatrix}0.0\\ -0.5\end{bmatrix}\)
  • Output layer (1 unit): \(\hat{y} = \sigma(W_2 h + b_2)\)
    • \(W_2 = \begin{bmatrix}1.5 & -2.0\end{bmatrix}\), \(b_2 = 0.1\)

Return \(\hat{y}\) as a decimal rounded to 4 digits.
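
A minimal numpy sketch to verify the hand computation (sigmoid applied elementwise at both layers, as specified):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

x  = np.array([1.0, 2.0])
W1 = np.array([[0.5, -1.0],
               [1.0,  0.5]])
b1 = np.array([0.0, -0.5])
W2 = np.array([1.5, -2.0])
b2 = 0.1

h     = sigmoid(W1 @ x + b1)   # pre-activations W1 @ x + b1 = [-1.5, 1.5]
y_hat = sigmoid(W2 @ h + b2)   # scalar output

print(round(float(y_hat), 4))  # prints 0.2207
```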
