{"blocks": [{"key": "0931a947", "text": "Scenario", "type": "header-two", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}}, {"key": "e10ef61b", "text": "The company screens ML engineers with a 90-minute CodeSignal test containing conceptual multiple-choice questions and hands-on Python modeling tasks.", "type": "unstyled", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}}, {"key": "5fb9e33e", "text": "Questions", "type": "header-two", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}}, {"key": "0051f5f4", "text": "State and interpret the bias and variance terms in the bias–variance decomposition of expected prediction error.", "type": "ordered-list-item", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}}, {"key": "3c9d41aa", "text": "Which regularization technique(s) can shrink linear-model coefficients exactly to zero, and why?", "type": "ordered-list-item", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}}, {"key": "7be20f15", "text": "Name two practical approaches for detecting data leakage in a supervised learning pipeline.", "type": "ordered-list-item", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}}, {"key": "c8a6d3e2", "text": "Given a dataframe df(user_id, event_time, event_type, purchase), build a binary classifier predicting whether a user will purchase within the next 7 days, and report AUC on a held-out set.", "type": "ordered-list-item", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}}, {"key": "52f7b9c0", "text": "Implement logistic regression with gradient descent using only numpy, and provide convergence diagnostics.", "type": "ordered-list-item", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}}, {"key": "e8644bc5", "text": "Hints", "type": "header-two", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}}, {"key": "1f5ac915", "text": "Discuss the bias–variance trade-off, L1 geometry, validation splits, and temporal leakage checks, and write clean, vectorized Python.", "type": "unstyled", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}}], "entityMap": {}}