LinkedIn Jobs Recommender Upgrade — Metrics, Experiment Design, Powering, and Diagnostics
Context
LinkedIn is upgrading the algorithm that recommends jobs to members across surfaces such as the Jobs tab, homepage modules, and notifications. This is a two‑sided marketplace: member outcomes (finding relevant jobs, applying) must improve without harming employer outcomes (quality and distribution of applications). Assume we can log impressions, positions, clicks, saves, apply starts/completions, response/latency, and eligibility sets per request.
Questions
- Which key offline and online metrics would you track to judge success?
- How would you design an A/B test to compare the new model against the current one while mitigating network effects?
- How would you determine required sample size, exposure, and test duration?
- If performance differs across user segments, how would you diagnose root causes and iterate?
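For the network-effects question, one common mitigation is cluster-based randomization: assign whole interference units (e.g., all jobs from one employer, or a community of connected members) to the same arm so that spillover stays within a cluster. A minimal sketch of deterministic cluster bucketing, assuming a hypothetical `cluster_id` key and a salt per experiment:

```python
import hashlib

def assign_arm(cluster_id: str, experiment: str, treat_frac: float = 0.5) -> str:
    """Deterministically assign an entire cluster to one experiment arm.

    Hashing (experiment, cluster_id) gives a stable, uniform bucket in [0, 1),
    so every request from the same cluster lands in the same arm without any
    shared state, and different experiments get independent assignments.
    """
    digest = hashlib.sha256(f"{experiment}:{cluster_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform in [0, 1)
    return "treatment" if bucket < treat_frac else "control"
```

The choice of cluster granularity is a bias/variance trade-off: coarser clusters contain more interference but leave fewer independent units, so effective sample size shrinks.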
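For the sample-size question, a standard starting point is a two-proportion power calculation on the primary conversion metric (e.g., apply rate). A sketch using only the standard library, with an assumed baseline rate and minimum detectable relative lift purely for illustration:

```python
import math
from statistics import NormalDist

def sample_size_per_arm(p_base: float, rel_lift: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Members per arm needed to detect a relative lift in a conversion rate
    with a two-sided two-proportion z-test."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)   # critical value for two-sided alpha
    z_b = nd.inv_cdf(power)           # quantile for desired power
    p_new = p_base * (1 + rel_lift)
    p_bar = (p_base + p_new) / 2
    numer = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
             + z_b * math.sqrt(p_base * (1 - p_base)
                               + p_new * (1 - p_new))) ** 2
    return math.ceil(numer / (p_new - p_base) ** 2)

# e.g., a 2% baseline apply rate and a 5% relative lift imply roughly
# 300k+ members per arm, which in turn bounds exposure and duration.
n = sample_size_per_arm(0.02, 0.05)
```

Duration then follows from eligible traffic per day, with at least one or two full weekly cycles to absorb day-of-week seasonality; cluster randomization (if used) inflates this further via the design effect.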