Evaluate Classifier with Precision, Recall, and Fairness Metrics
Company: Meta
Role: Data Scientist
Category: Machine Learning
Difficulty: medium
Interview Round: Technical Screen
Quick Answer: This question tests the ability to design an offline evaluation framework for a binary classifier: choosing ranking metrics (e.g., ROC-AUC, PR-AUC) and operating-point metrics (precision, recall), handling calibration and class imbalance, defining a ground-truth labeling protocol, setting thresholds under asymmetric costs and capacity constraints, and running subgroup fairness analyses. It is a common technical-screen question because it probes both conceptual understanding of statistical and fairness trade-offs and the ability to translate business constraints into measurable evaluation criteria, testing applied reasoning rather than purely theoretical knowledge.
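A minimal sketch of the kind of evaluation a strong answer might describe, using scikit-learn on synthetic data. The data-generating code, the 5:1 false-negative cost ratio, and the binary subgroup variable are illustrative assumptions, not part of the question itself.

```python
import numpy as np
from sklearn.metrics import (
    precision_score, recall_score, roc_auc_score,
    average_precision_score, brier_score_loss,
)

# Hypothetical model outputs: y_true = ground-truth labels,
# y_score = predicted probabilities, group = protected-subgroup flag.
rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)                 # 0/1 subgroup membership
y_true = rng.binomial(1, 0.1, size=n)              # ~10% positives (imbalanced)
y_score = np.clip(0.1 + 0.5 * y_true + rng.normal(0, 0.2, size=n), 0, 1)

# Ranking metrics: threshold-free, so a sensible starting point under
# class imbalance (PR-AUC is more informative than ROC-AUC here).
print("ROC-AUC:", roc_auc_score(y_true, y_score))
print("PR-AUC :", average_precision_score(y_true, y_score))

# Calibration check: Brier score (lower means better-calibrated scores).
print("Brier  :", brier_score_loss(y_true, y_score))

# Operating point under asymmetric costs: sweep thresholds and pick the
# one minimizing expected cost, assuming a false negative costs 5x a
# false positive (an illustrative business assumption).
COST_FP, COST_FN = 1.0, 5.0
thresholds = np.linspace(0.01, 0.99, 99)
costs = [
    COST_FP * np.sum((y_score >= t) & (y_true == 0))
    + COST_FN * np.sum((y_score < t) & (y_true == 1))
    for t in thresholds
]
t_star = thresholds[int(np.argmin(costs))]
y_pred = (y_score >= t_star).astype(int)
print(f"chosen threshold: {t_star:.2f}")
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))

# Subgroup fairness at the chosen operating point:
# equal-opportunity gap = difference in TPR (recall) across groups;
# also compare FPR to check equalized odds.
tpr = {g: recall_score(y_true[group == g], y_pred[group == g]) for g in (0, 1)}
fpr = {g: float(np.mean(y_pred[(group == g) & (y_true == 0)])) for g in (0, 1)}
print("TPR by group:", tpr, "gap:", abs(tpr[0] - tpr[1]))
print("FPR by group:", fpr, "gap:", abs(fpr[0] - fpr[1]))
```

A capacity constraint (e.g., a fixed review budget of k cases per day) would replace the cost sweep with a top-k cutoff on y_score; a complete interview answer would also mention confidence intervals on each metric and a labeling protocol for the ground truth.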