Large Language Models' Hallucinations: The Missing Piece Is Memory
2025-09-10

The author contrasts how humans and large language models (LLMs) process information, starting from a personal experience with a Ruby library. Humans possess a sedimentary, experience-built memory that lets them sense where a piece of knowledge came from and how reliable it is, so they avoid guessing at random. LLMs lack this experiential memory; their knowledge resembles inherited DNA rather than skills acquired through lived experience, which leads to hallucinations. The author argues that resolving LLM hallucinations requires new AI models capable of "living" in and learning from the real world.
AI