Apple Paper Challenges AI Reasoning: Not 'Real' Reasoning?

2025-06-09

Apple's recent paper, "The Illusion of Thinking," tests large language models' reasoning abilities on Tower of Hanoi puzzles. The results show that reasoning models perform worse than non-reasoning models on simple problems, better on medium-difficulty ones, and on complex problems they give up entirely, even when handed the solution algorithm. The authors take this as evidence against generalizable reasoning capabilities. This article argues, however, that Tower of Hanoi is a flawed test: the models' 'giving up' may reflect an unwillingness to emit an enormous number of steps rather than a limit on reasoning ability. Abandoning a problem past a certain step count does not prove a lack of reasoning; humans behave the same way on complex problems.
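The step-count objection is easy to quantify: the minimal Tower of Hanoi solution for n disks takes 2^n − 1 moves, so the output a model must produce grows exponentially with puzzle size. A minimal Python sketch (function names are illustrative, not from the paper) shows how quickly the move list explodes:

```python
def solve_hanoi(n, source="A", target="C", aux="B", moves=None):
    """Standard recursive Tower of Hanoi; returns the full list of moves."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((source, target))
        return moves
    # Move n-1 disks out of the way, move the largest, then restack.
    solve_hanoi(n - 1, source, aux, target, moves)
    moves.append((source, target))
    solve_hanoi(n - 1, aux, target, source, moves)
    return moves

# The minimal solution length is 2^n - 1, so transcribing it verbatim
# becomes impractical long before the algorithm itself gets harder:
for n in (5, 10, 15):
    print(n, len(solve_hanoi(n)))  # 31, 1023, 32767 moves
```

At 15 disks a model would have to write out over 32,000 individual moves without a single slip, which is a test of transcription stamina as much as of reasoning.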

AI