Apple Paper Throws Shade on LLMs: Are Large Reasoning Models Fundamentally Limited?
2025-06-16
A recent Apple paper argues that Large Reasoning Models (LRMs) are limited in exact computation: they fail to apply explicit algorithms and reason inconsistently across puzzle types. If correct, this is a significant blow to the current push to build AGI on top of LLMs and LRMs. A rebuttal paper on arXiv attempts to counter Apple's findings, but it is flawed: it contains mathematical errors, conflates mechanical execution with reasoning complexity, and its own data contradicts its conclusions. Critically, the rebuttal ignores Apple's key finding that models systematically reduce computational effort on harder problems, which suggests fundamental scaling limitations in current LRM architectures.
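To see why "exact computation" is a demanding test, consider Tower of Hanoi, one of the puzzle families used in this line of work: the explicit algorithm is only a few lines, yet the required solution grows exponentially with problem size, so faithfully executing it means producing exponentially long exact output. A minimal sketch (illustrative only, not code from either paper):

```python
# Illustrative sketch: Tower of Hanoi has a short, exact recursive
# algorithm, but the full solution is 2**n - 1 moves, so executing
# it exactly requires exponentially long, error-free output.

def hanoi(n, src="A", aux="B", dst="C"):
    """Return the complete move list for n disks."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest, then restack.
    return (hanoi(n - 1, src, dst, aux)
            + [(src, dst)]
            + hanoi(n - 1, aux, src, dst))

for n in (3, 10):
    print(n, len(hanoi(n)))  # move count equals 2**n - 1
```

A model that truly follows this algorithm should scale its output with 2**n - 1; Apple's finding is that effort instead shrinks as problems get harder.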