
Explain and derive importance sampling estimators

Last updated: Mar 29, 2026

Quick Overview

This question evaluates understanding of importance sampling: Monte Carlo estimators, weight normalization, variance behavior, optimal proposal selection, and effective sample size. It assesses competency in statistical estimation and Monte Carlo methods.

  • Difficulty: Hard
  • Company: Tesla
  • Category: Statistics & Math
  • Role: Machine Learning Engineer
  • Interview Round: Technical Screen

Explain importance sampling. Derive an estimator for mu = E_p[f(X)] when sampling from a proposal q whose support covers that of p. Show the unnormalized estimator using weights w(x) = p(x) / q(x) and the self-normalized estimator, and discuss bias and variance properties of each. Derive how the variance depends on the choice of q and why an ideal proposal is proportional to |f(x)| p(x). Define and interpret effective sample size (ESS). Provide a concrete numerical example (e.g., estimating an integral or expectation under a target Gaussian using a different Gaussian proposal), include pseudocode, and discuss pitfalls such as weight degeneracy, heavy-tailed proposals, and high-variance tails. Optionally relate importance sampling to off-policy evaluation in reinforcement learning and resampling in particle filters.
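
For the ESS part, one standard definition the question is pointing at (a sketch, assuming weights w_i = p(X_i)/q(X_i)):

ESS = (Σ_i w_i)^2 / Σ_i w_i^2.

ESS equals n when all weights are equal (q = p) and falls toward 1 as a single weight dominates, so it estimates how many i.i.d. draws from p the weighted sample is effectively worth.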


Importance Sampling: Estimators, Properties, Optimal Proposals, and ESS

Context

You want to estimate an expectation under a target distribution p over X:
mu = E_p[f(X)] = ∫ f(x) p(x) dx.
Direct sampling from p may be hard, but you can sample from an easier proposal q whose support covers that of p.
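
A minimal sketch of the derivation requested below, assuming q(x) > 0 wherever f(x) p(x) ≠ 0:

mu = ∫ f(x) p(x) dx = ∫ f(x) [p(x)/q(x)] q(x) dx = E_q[w(X) f(X)], with w(x) = p(x)/q(x).

Hence, for X_1, ..., X_n ~ q i.i.d., the unnormalized estimator mu_hat = (1/n) Σ_i w(X_i) f(X_i) is unbiased, with variance (1/n) (∫ f(x)^2 p(x)^2 / q(x) dx − mu^2), which is minimized when q*(x) ∝ |f(x)| p(x). The self-normalized estimator mu_tilde = Σ_i w(X_i) f(X_i) / Σ_i w(X_i) needs p and q only up to normalizing constants; it is consistent but carries O(1/n) bias.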

Tasks

  1. Derive the importance sampling identity and the estimator for mu when sampling X_i ~ q.
  2. Show both:
    • The unnormalized estimator using weights w(x) = p(x)/q(x).
    • The self-normalized estimator.
  3. Discuss bias and variance of each estimator and conditions for finite variance.
  4. Derive how the variance depends on q and why the ideal proposal is proportional to |f(x)| p(x).
  5. Define and interpret effective sample size (ESS).
  6. Provide a concrete numerical example (e.g., target Gaussian, different Gaussian proposal), with pseudocode (see the runnable sketch after this list).
  7. Discuss pitfalls: weight degeneracy, heavy-tailed proposals, high-variance tails. Optionally relate to off-policy evaluation (RL) and resampling in particle filters.
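
A minimal runnable sketch of item 6 in Python (assumptions: target p = N(0, 1), f(x) = x^2 with true value 1.0, proposal q = N(1, 1.5^2); the names log_p and log_q are illustrative, not from the question):

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def f(x):
    return x ** 2  # E_p[X^2] = 1.0 under the standard normal target

def log_p(x):
    # target: N(0, 1) log-density
    return -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)

def log_q(x, m=1.0, s=1.5):
    # proposal: N(1, 1.5^2) log-density (wider than the target on purpose)
    return -0.5 * ((x - m) / s) ** 2 - np.log(s) - 0.5 * np.log(2 * np.pi)

# Sample from the proposal; work with log-weights to avoid underflow.
x = rng.normal(1.0, 1.5, size=n)
log_w = log_p(x) - log_q(x)

# Unnormalized estimator: requires fully normalized densities.
mu_un = np.mean(np.exp(log_w) * f(x))

# Self-normalized estimator: invariant to constant factors in the weights.
w = np.exp(log_w - log_w.max())
mu_sn = np.sum(w * f(x)) / np.sum(w)

# Effective sample size (also scale-invariant in the weights).
ess = np.sum(w) ** 2 / np.sum(w ** 2)

print(f"unnormalized IS:    {mu_un:.4f}")
print(f"self-normalized IS: {mu_sn:.4f}")
print(f"ESS: {ess:.0f} of {n}")

Both estimates should land near 1.0. Shrinking the proposal's standard deviation below the target's makes the weights p/q unbounded, which inflates (or breaks) the variance and collapses the ESS; this is the weight-degeneracy pitfall in item 7.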

