{"blocks": [{"key": "65b08dd0", "text": "Scenario", "type": "header-two", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}}, {"key": "3d76c437", "text": "Deep-learning discussion on LLM pipelines, knowledge-graph integration and retrieval-augmented generation.", "type": "unstyled", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}}, {"key": "9cc4cda2", "text": "Question", "type": "header-two", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}}, {"key": "a0b69f17", "text": "How would you control the cost of maintaining a knowledge graph used by an LLM? How do you measure the accuracy of LLM outputs, both offline and online? Compare Transformer, RNN and LSTM. Why are Transformers preferred for modern LLMs? Derive the scaled dot-product attention formula and explain each term. Explain the end-to-end workflow of Retrieval-Augmented Generation (RAG). What is a reranker model and where does it sit in the RAG stack? How does embedding vector dimensionality influence retrieval quality? What is LoRA, how does it work and why is it parameter-efficient? How would you evaluate the accuracy of a RAG system?", "type": "unstyled", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}}, {"key": "0f39e0ce", "text": "Hints", "type": "header-two", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}}, {"key": "5361954a", "text": "Relate theory to production: costs, equations, eval metrics (BLEU, EM, precision@k), trade-offs.", "type": "unstyled", "depth": 0, "inlineStyleRanges": [], "entityRanges": [], "data": {}}], "entityMap": {}}