LLMs: Manipulating Symbols or Understanding the World?
2025-06-04

This article challenges the prevailing assumption that Large Language Models (LLMs) understand the world. While LLMs excel at language tasks, the author argues that this success stems from learned heuristics for next-token prediction rather than from a genuine world model. True AGI, the author contends, requires a deep understanding of the physical world, a capability LLMs currently lack. The article criticizes the multimodal approach to AGI, advocating instead for embodied cognition and interaction with the environment as the central focus of future research.