
Classifying reviewers as lazy or careful with limited labels
You are auditing a pool of reviewers, each of whom is either lazy (L) or careful (C).
Assume a known prior π = P(L) that a reviewer is lazy, and known per-item accuracies a_L and a_C with a_C > a_L. For each reviewer you observe performance on n gold items (items with known ground truth), yielding k correct answers out of n; the task is to decide, from (k, n), whether that reviewer is lazy or careful.
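One way to make this precise, assuming each gold answer is correct independently with probability a_L or a_C given the reviewer's type, is to model k as Binomial(n, a_L) for a lazy reviewer and Binomial(n, a_C) for a careful one. Bayes' rule then gives the posterior probability of being lazy:

    P(L | k, n) = π · Binom(k; n, a_L) / [ π · Binom(k; n, a_L) + (1 − π) · Binom(k; n, a_C) ]

where Binom(k; n, a) = C(n, k) · a^k · (1 − a)^(n − k); the C(n, k) factors cancel, so only the ratio of likelihoods matters.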
Hints: Treat the reviewer type as the latent class and use the Bayes-optimal decision boundary; the classifier's error rate shrinks as the number of gold reviews n grows.
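A minimal sketch of that decision rule in Python, assuming the binomial model above; the function names, signatures, and example values are illustrative, not part of the problem statement:

    from math import comb, log

    # Posterior probability that a reviewer is lazy, given k correct out of n gold items.
    # Bayes' rule with binomial likelihoods; pi = P(L), a_L and a_C are the per-item
    # accuracies of lazy and careful reviewers, with a_C > a_L.
    def posterior_lazy(k, n, pi, a_L, a_C):
        lik_L = comb(n, k) * a_L**k * (1 - a_L)**(n - k)
        lik_C = comb(n, k) * a_C**k * (1 - a_C)**(n - k)
        return pi * lik_L / (pi * lik_L + (1 - pi) * lik_C)

    # Bayes-optimal label under 0-1 loss: pick whichever type has the higher posterior.
    def classify(k, n, pi, a_L, a_C):
        return "lazy" if posterior_lazy(k, n, pi, a_L, a_C) > 0.5 else "careful"

    # Closed-form decision boundary: "careful" wins once k exceeds this value.
    # Found by setting the log posterior odds to zero; the denominator is positive
    # because a_C > a_L, so the boundary is a single threshold on k.
    def careful_threshold(n, pi, a_L, a_C):
        num = log(pi / (1 - pi)) + n * log((1 - a_L) / (1 - a_C))
        den = log(a_C / a_L) + log((1 - a_L) / (1 - a_C))
        return num / den

With illustrative numbers π = 0.3, a_L = 0.6, a_C = 0.9 and n = 10, careful_threshold returns about 7.3, so the rule labels a reviewer careful once they answer 8 or more gold items correctly. Increasing n separates the two binomial distributions further, which is why the error rate shrinks as the review count grows.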