Beyond Stochastic Parrots: The Circuits of Large Language Models

Large language models (LLMs) have been dismissed by some as mere "stochastic parrots" that simply memorize and regurgitate statistical patterns from their training data. Recent research, however, reveals a more nuanced reality: these models contain complex internal "circuits", self-learned algorithms that solve specific classes of problems. These circuits enable generalization to unseen situations, such as generating rhyming couplets, and even allow models to plan a couplet's structure in advance rather than improvising it token by token. While limitations remain, these findings challenge the "stochastic parrot" narrative and raise a deeper question about the nature of model intelligence: can LLMs independently develop new circuits to solve entirely novel problems?