
Explain CNN shapes, params, and trade-offs

Last updated: Mar 29, 2026

Quick Overview

This question evaluates understanding of convolutional neural network tensor shapes, parameter counts and multiply–accumulate (MAC) calculations, receptive field analysis, and architectural trade-offs including depthwise separable convolutions and normalization placement.

  • medium
  • Apple
  • Machine Learning
  • Data Scientist

Interview Round: Onsite



CNN Shapes, Compute, and Design Trade-offs

Context

You are given an input tensor X with shape H×W×C = 64×64×3. Consider the following convolutional neural network (CNN):

  • L1: Conv 3×3, stride=1, padding=1, 32 filters
  • L2: MaxPool 2×2, stride=2
  • L3: Conv 3×3, dilation=2, stride=1, padding=2, 64 filters
  • L4: Depthwise separable conv: depthwise 3×3 (stride=1, padding=1) + pointwise 1×1 to 128 channels
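Under the standard convention output = ⌊(H + 2·pad − d·(k − 1) − 1)/stride⌋ + 1, the shapes in part (a) can be verified with a short Python sketch (the helper name `conv_out` is illustrative, not from any library):

```python
def conv_out(size, k, stride=1, pad=0, dilation=1):
    """Spatial output size of a conv/pool layer along one dimension."""
    eff_k = dilation * (k - 1) + 1          # effective (dilated) kernel size
    return (size + 2 * pad - eff_k) // stride + 1

h = w = 64                                   # input: 64x64x3
h = w = conv_out(h, 3, stride=1, pad=1)      # L1 -> 64x64x32
h = w = conv_out(h, 2, stride=2)             # L2 -> 32x32x32
h = w = conv_out(h, 3, pad=2, dilation=2)    # L3 -> 32x32x64 (effective 5x5 kernel)
h = w = conv_out(h, 3, pad=1)                # L4 depthwise -> 32x32x64
                                             # L4 pointwise 1x1 -> 32x32x128
print(h, w)                                  # 32 32
```

Note that L3's dilation enlarges the effective kernel to 5x5, so padding=2 is exactly what keeps the spatial size at 32x32.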

Tasks

(a) Compute the output shape H×W×C after each layer.

(b) Compute parameter counts and MACs for L1, L3, and L4. Compare L4’s MACs to a standard 3×3 conv with 64→128 channels (same input size as L4). Show formulas and numbers.
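For part (b), params = k²·C_in·C_out (+ C_out biases) and MACs = k²·C_in·C_out·H_out·W_out (one multiply–accumulate per weight per output position; biases conventionally excluded from MACs). A sanity-check sketch, with helper names that are illustrative only:

```python
def conv_params(k, c_in, c_out, bias=True):
    """Standard conv: k*k*c_in weights per filter, c_out filters, plus biases."""
    return k * k * c_in * c_out + (c_out if bias else 0)

def conv_macs(k, c_in, c_out, h, w):
    """One MAC per weight per output position (biases ignored)."""
    return k * k * c_in * c_out * h * w

# L1: 3x3 conv, 3 -> 32 channels, 64x64 output
p1 = conv_params(3, 3, 32)                 # 896
m1 = conv_macs(3, 3, 32, 64, 64)           # 3,538,944

# L3: 3x3 dilated conv, 32 -> 64 channels, 32x32 output
# (dilation enlarges the receptive field but changes neither params nor MACs)
p3 = conv_params(3, 32, 64)                # 18,496
m3 = conv_macs(3, 32, 64, 32, 32)          # 18,874,368

# L4: depthwise 3x3 (one 3x3 filter per channel) + pointwise 1x1, 64 -> 128
p4_dw = 3 * 3 * 64 + 64                    # 640
p4_pw = conv_params(1, 64, 128)            # 8,320
m4_dw = 3 * 3 * 64 * 32 * 32               # 589,824
m4_pw = conv_macs(1, 64, 128, 32, 32)      # 8,388,608
m4 = m4_dw + m4_pw                         # 8,978,432

# Standard 3x3 conv, 64 -> 128, same 32x32 output, for comparison
m_std = conv_macs(3, 64, 128, 32, 32)      # 75,497,472
print(round(m_std / m4, 2))                # ~8.41x fewer MACs for L4
```

The ~8.4x saving matches the usual depthwise-separable cost ratio 1/C_out + 1/k² = 1/128 + 1/9 ≈ 0.119.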

(c) Compute the receptive field size (in input pixels) of a single activation after L4.
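For part (c), the standard recurrence is RF ← RF + (k_eff − 1)·j, where j is the cumulative stride ("jump") of all earlier layers and k_eff = d·(k − 1) + 1. A sketch (the helper is illustrative):

```python
def receptive_field(layers):
    """layers: (kernel, stride, dilation) tuples, applied in order."""
    rf, jump = 1, 1
    for k, s, d in layers:
        k_eff = d * (k - 1) + 1      # effective kernel size under dilation
        rf += (k_eff - 1) * jump     # growth scaled by cumulative stride
        jump *= s
    return rf

layers = [
    (3, 1, 1),  # L1 conv 3x3            -> RF 3,  jump 1
    (2, 2, 1),  # L2 maxpool 2x2         -> RF 4,  jump 2
    (3, 1, 2),  # L3 dilated conv (eff 5) -> RF 12, jump 2
    (3, 1, 1),  # L4 depthwise 3x3       -> RF 16 (pointwise 1x1 adds nothing)
]
print(receptive_field(layers))  # 16 -> each L4 activation sees a 16x16 input patch
```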

(d) Where would you place BatchNorm relative to activation for stable training (pre‑act vs post‑act) and why?

(e) When would you prefer stride vs pooling vs dilation to preserve information while controlling compute?

(f) Explain the bias–variance and optimization trade‑offs of using depthwise separable convolutions in tiny‑model regimes.

