PracHub

Test classifier difference with McNemar's test

Last updated: Mar 29, 2026

Quick Overview

This question evaluates understanding of paired-proportion hypothesis testing and comparative classifier evaluation using McNemar's test, exact binomial inference, paired confidence intervals, and multiple-comparison considerations in the Statistics & Math domain for a Data Scientist role.

  • medium
  • Microsoft
  • Statistics & Math
  • Data Scientist

Test classifier difference with McNemar's test

Company: Microsoft

Role: Data Scientist

Category: Statistics & Math

Difficulty: medium

Interview Round: Onsite


Related Interview Questions

  • Choose Classification Metrics Under Asymmetric Costs - Microsoft (medium)
  • Use confusion matrix to choose model metric - Microsoft (easy)
  • Compute sample size and analyze A/B results - Microsoft (medium)
  • Compute P(Bag B | red) via Bayes - Microsoft (easy)
Posted: Oct 13, 2025

Paired Comparison of Two Classifiers via McNemar's Test

You evaluated two classifiers on the same 10,000 labeled examples and summarized the paired outcomes in a 2×2 table:

  • Both correct: n11 = 8,740
  • Both wrong: n00 = 740
  • A correct / B wrong: n10 = 300
  • A wrong / B correct: n01 = 220

Let b = n10 (A correct, B wrong) and c = n01 (A wrong, B correct).

Answer the following:

  1. Using McNemar's test with continuity correction, test H0: the error rates are equal for A and B. Compute and show the intermediate numbers (b, c, |b − c|, b + c), the test statistic, and the p-value.
  2. Compute the exact binomial two-sided p-value for the same H0 by conditioning on b + c. Explain when you would prefer the exact test over the asymptotic McNemar test.
  3. Provide a 95% confidence interval for the paired accuracy difference (A − B). State which method you use and why.
  4. Discuss the assumptions behind McNemar's test, when it is inappropriate, and how you would adjust for multiple testing if comparing A against 10 other models.
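The four parts above can be worked through numerically. The snippet below is a sketch (not the site's official solution): it takes b, c, and n from the question, uses SciPy for the chi-square tail and the exact binomial test, and uses the Wald paired-difference interval for part 3 as one common choice; the variable names and the Bonferroni level for part 4 are illustrative.

```python
from math import sqrt
from scipy.stats import chi2, binomtest, norm

# Paired counts from the question
n = 10_000
b, c = 300, 220          # discordant pairs: A-only correct, B-only correct

# 1) McNemar's test with continuity correction:
#    chi2 = (|b - c| - 1)^2 / (b + c), 1 degree of freedom
stat = (abs(b - c) - 1) ** 2 / (b + c)
p_mcnemar = chi2.sf(stat, df=1)

# 2) Exact binomial test: condition on the b + c discordant pairs,
#    under H0 each is A-only-correct with probability 0.5
p_exact = binomtest(b, b + c, p=0.5, alternative="two-sided").pvalue

# 3) 95% Wald CI for the paired accuracy difference (A - B) = (b - c)/n,
#    with SE accounting for the pairing via the discordant counts
diff = (b - c) / n
se = sqrt(b + c - (b - c) ** 2 / n) / n
z = norm.ppf(0.975)
ci = (diff - z * se, diff + z * se)

# 4) Comparing A against 10 models: Bonferroni tests each at alpha/m
#    (Holm's step-down procedure is a less conservative alternative)
alpha, m = 0.05, 10
per_test_level = alpha / m

print(f"chi2 = {stat:.3f}, p (McNemar, CC) = {p_mcnemar:.2e}")
print(f"p (exact binomial)  = {p_exact:.2e}")
print(f"accuracy diff = {diff:.4f}, 95% CI = ({ci[0]:.4f}, {ci[1]:.4f})")
print(f"Bonferroni per-test level for {m} comparisons: {per_test_level}")
```

With these counts the continuity-corrected statistic is (80 − 1)²/520 ≈ 12.0 on 1 df, so both the asymptotic and exact p-values come out well below the Bonferroni-adjusted level, and the CI for the accuracy difference excludes zero.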

