{"blocks": [
{"key": "8e58180e", "text": "Scenario", "type": "header-two", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}},
{"key": "1dfc68f9", "text": "You are building a large-scale recommendation system and must choose and evaluate ensemble models.", "type": "unstyled", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}},
{"key": "07adfdd5", "text": "Question", "type": "header-two", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}},
{"key": "62bc3b02", "text": "Answer the following; illustrative code sketches appear after the hints.", "type": "unstyled", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}},
{"key": "a3f1c2d0", "text": "Compare Random Forest and XGBoost in terms of the bias–variance trade-off, training speed, and interpretability.", "type": "ordered-list-item", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}},
{"key": "b4e2d3c1", "text": "What is overfitting? List at least three techniques you would apply to reduce it in this context.", "type": "ordered-list-item", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}},
{"key": "c5f3e4d2", "text": "Describe at least four evaluation metrics you would consider for the model, and explain when each is preferable.", "type": "ordered-list-item", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}},
{"key": "d6a4f5e3", "text": "Explain how LoRA adapts large transformers, and contrast CNN, RNN, and Transformer architectures, including why attention helps with long-range dependencies.", "type": "ordered-list-item", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}},
{"key": "e7b5a6f4", "text": "What causes vanishing and exploding gradients, and how do batch normalization, residual connections, and careful weight initialization mitigate them?", "type": "ordered-list-item", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}},
{"key": "69fd5e1a", "text": "Hints", "type": "header-two", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}},
{"key": "cedbe96d", "text": "Think about model capacity, regularization, data augmentation, early stopping, cross-validation, and metric selection.", "type": "unstyled", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}}
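,
{"key": "f8c6b7a5", "text": "Example sketches", "type": "header-two", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}},
{"key": "0a1b2c3d", "text": "The Python sketches below are minimal illustrations of points raised in questions 1–5, not reference implementations. They assume scikit-learn, xgboost, and PyTorch are installed; all datasets, hyperparameters, and names are placeholders.", "type": "unstyled", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}},
{"key": "11aa22bb", "text": "For question 1, a rough way to compare training speed is to time both ensembles on the same split; the synthetic dataset is a stand-in for real interaction data, and accuracy here is only a sanity check.", "type": "unstyled", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}},
{"key": "22bb33cc", "text": "import time\nfrom sklearn.datasets import make_classification\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import train_test_split\nfrom xgboost import XGBClassifier\n\n# Synthetic stand-in for real interaction data.\nX, y = make_classification(n_samples=50_000, n_features=40, random_state=0)\nX_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)\n\nfor name, model in [\n    ('RandomForest', RandomForestClassifier(n_estimators=300, n_jobs=-1, random_state=0)),\n    ('XGBoost', XGBClassifier(n_estimators=300, learning_rate=0.1, n_jobs=-1, random_state=0)),\n]:\n    start = time.perf_counter()\n    model.fit(X_tr, y_tr)\n    elapsed = time.perf_counter() - start\n    print(f'{name}: {elapsed:.1f}s, test accuracy={model.score(X_te, y_te):.3f}')", "type": "code-block", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}}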
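,
{"key": "33cc44dd", "text": "For question 2, three overfitting controls in one place: capacity limits (max_depth), randomized subsampling, and early stopping on a held-out validation set. This assumes a recent xgboost where early_stopping_rounds is a constructor argument, and it reuses the split from the previous sketch.", "type": "unstyled", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}},
{"key": "44dd55ee", "text": "from sklearn.model_selection import train_test_split\nfrom xgboost import XGBClassifier\n\n# Carve a validation set out of the training split from the previous sketch.\nX_fit, X_val, y_fit, y_val = train_test_split(X_tr, y_tr, test_size=0.2, random_state=0)\n\nmodel = XGBClassifier(\n    n_estimators=2000,         # upper bound; early stopping picks the actual count\n    learning_rate=0.05,\n    max_depth=6,               # caps tree capacity\n    subsample=0.8,             # row subsampling adds randomness\n    colsample_bytree=0.8,      # feature subsampling per tree\n    reg_lambda=1.0,            # L2 penalty on leaf weights\n    early_stopping_rounds=50,  # stop once validation loss stalls for 50 rounds\n    eval_metric='logloss',\n)\nmodel.fit(X_fit, y_fit, eval_set=[(X_val, y_val)], verbose=False)\nprint('best iteration:', model.best_iteration)", "type": "code-block", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}}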
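,
{"key": "55ee66ff", "text": "For question 3, four metrics with different failure modes: ROC-AUC is threshold-free ranking quality, PR-AUC is more informative under heavy class imbalance (the usual case for clicks), log loss rewards calibrated probabilities, and F1 balances precision and recall at a fixed threshold. This continues from the model and split of the previous sketches.", "type": "unstyled", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}},
{"key": "66ff77aa", "text": "from sklearn.metrics import roc_auc_score, average_precision_score, log_loss, f1_score\n\nproba = model.predict_proba(X_te)[:, 1]  # model and split from the previous sketches\npred = (proba >= 0.5).astype(int)\n\nprint('ROC-AUC :', roc_auc_score(y_te, proba))            # threshold-free ranking quality\nprint('PR-AUC  :', average_precision_score(y_te, proba))  # better under class imbalance\nprint('log loss:', log_loss(y_te, proba))                 # calibrated probability quality\nprint('F1      :', f1_score(y_te, pred))                  # precision/recall balance at a threshold\n# For ranked top-k recommendation lists, per-user NDCG is usually more faithful:\n# sklearn.metrics.ndcg_score(true_relevance, predicted_scores, k=10)", "type": "code-block", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}}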
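,
{"key": "77aa88bb", "text": "For question 4, a minimal LoRA sketch: the pretrained weight is frozen and a low-rank update B @ A, scaled by alpha / r, is learned instead, so only r * (d_in + d_out) parameters train per adapted layer. Class and parameter names here are illustrative, not taken from any particular library.", "type": "unstyled", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}},
{"key": "88bb99cc", "text": "import torch\nimport torch.nn as nn\n\nclass LoRALinear(nn.Module):\n    def __init__(self, d_in, d_out, r=8, alpha=16):\n        super().__init__()\n        self.base = nn.Linear(d_in, d_out)      # stands in for a pretrained layer\n        self.base.weight.requires_grad_(False)  # frozen\n        self.base.bias.requires_grad_(False)\n        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # low-rank factor\n        self.B = nn.Parameter(torch.zeros(d_out, r))        # zero init: update starts at 0\n        self.scale = alpha / r\n\n    def forward(self, x):\n        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)\n\nlayer = LoRALinear(768, 768)\nout = layer(torch.randn(4, 768))  # only A and B receive gradients", "type": "code-block", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}}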
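,
{"key": "99ccaadd", "text": "Also for question 4: in an RNN, information between positions i and j must survive |i - j| sequential steps, while scaled dot-product attention connects every pair of positions in a single step, which is why transformers cope well with long-range dependencies. A minimal self-attention sketch:", "type": "unstyled", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}},
{"key": "aaddbbee", "text": "import math\nimport torch\n\ndef attention(q, k, v):\n    # scores[i, j] links position i to position j directly, whatever their distance\n    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))\n    return torch.softmax(scores, dim=-1) @ v\n\nx = torch.randn(2, 100, 64)  # (batch, sequence length, model dim)\nout = attention(x, x, x)     # self-attention: path length 1 between any two tokens", "type": "code-block", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}}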
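,
{"key": "bbeeccff", "text": "For question 5, gradients vanish or explode because backpropagation multiplies one Jacobian per layer, so magnitudes compound geometrically with depth. Residual connections add an identity path the gradient can flow through unattenuated, batch normalization keeps activations in a stable range, and Kaiming initialization matches weight variance to ReLU fan-in. A sketch combining all three:", "type": "unstyled", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}},
{"key": "ccffdd00", "text": "import torch\nimport torch.nn as nn\n\nclass ResidualBlock(nn.Module):\n    def __init__(self, dim):\n        super().__init__()\n        self.fc1 = nn.Linear(dim, dim)\n        self.bn1 = nn.BatchNorm1d(dim)  # re-centers and rescales activations\n        self.fc2 = nn.Linear(dim, dim)\n        self.bn2 = nn.BatchNorm1d(dim)\n        for fc in (self.fc1, self.fc2):\n            nn.init.kaiming_normal_(fc.weight, nonlinearity='relu')  # variance-preserving for ReLU\n            nn.init.zeros_(fc.bias)\n\n    def forward(self, x):\n        h = torch.relu(self.bn1(self.fc1(x)))\n        h = self.bn2(self.fc2(h))\n        return torch.relu(x + h)  # the skip path passes gradients through unattenuated\n\nblock = ResidualBlock(128)\nout = block(torch.randn(32, 128))", "type": "code-block", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}}
], "entityMap": {}}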