This question evaluates a candidate's understanding of the causes and mitigation of large language model hallucinations, covering competencies in probabilistic training objectives, data characteristics, model and optimization limitations, inference-time behavior, and awareness of mitigation strategies.
Large language models (LLMs) are known to "hallucinate"—that is, they sometimes produce fluent, confident answers that are factually incorrect or unsupported by any source.
Explain why LLMs hallucinate. In your answer, cover:

- the probabilistic training objective (a sketch of the standard objective follows this list)
- characteristics of the training data
- model and optimization limitations
- inference-time behavior
- mitigation strategies and their limitations
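For reference on the first point, here is a minimal sketch of the standard next-token training objective (maximum-likelihood / cross-entropy over a training sequence; this is the conventional formulation, not text quoted from the question itself):

\[
\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta\!\left(x_t \mid x_{<t}\right)
\]

This objective rewards assigning high probability to statistically plausible continuations of the context rather than to verified facts, which is one commonly cited root cause of hallucination.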