LLMs' Daydreaming Loop: The Price of Breakthrough Innovation?

Despite their impressive capabilities, large language models (LLMs) have yet to produce a genuine breakthrough. The author attributes this to the lack of a background processing mechanism akin to the human brain's default mode network, and proposes a 'daydreaming loop' (DDL) to fill the gap: a background process that continuously samples concept pairs from memory, explores non-obvious links between them, and filters the results for valuable ideas, creating a compounding feedback loop. While computationally expensive, this 'daydreaming tax' may be the necessary price of innovation, and a competitive moat for whoever pays it. Ultimately, expensive 'daydreaming AIs' might serve primarily to generate training data for the next generation of efficient models, circumventing the looming data wall.
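
The essay describes the loop only at this level of abstraction. As an illustration, here is a minimal Python sketch of one possible DDL; the `llm()` call, the `novelty_score()` critic, and the flat-list memory are all hypothetical stand-ins, not the author's implementation.

```python
import random

def llm(prompt: str) -> str:
    """Placeholder for a call to any LLM; swap in a real client here."""
    return f"(model output for: {prompt[:40]}...)"

def novelty_score(idea: str) -> float:
    """Toy critic. A real filter would ask a model (or a human) to rate
    the idea's novelty and usefulness; here we just return noise."""
    return random.random()

def daydream(memory: list[str], steps: int = 100,
             threshold: float = 0.9) -> list[str]:
    """One run of the daydreaming loop over a concept memory."""
    keepers = []
    for _ in range(steps):
        # 1. Sample a concept pair from memory.
        a, b = random.sample(memory, 2)
        # 2. Explore a non-obvious link between the two concepts.
        idea = llm(f"Find a non-obvious connection between {a!r} and {b!r}.")
        # 3. Filter: keep only ideas the critic rates as valuable.
        if novelty_score(idea) >= threshold:
            keepers.append(idea)
            # 4. Feed accepted ideas back into memory so later samples
            #    can build on them -- the compounding feedback loop.
            memory.append(idea)
    return keepers

if __name__ == "__main__":
    seed_memory = ["backpropagation", "default mode network",
                   "data wall", "curriculum learning"]
    for idea in daydream(seed_memory):
        print(idea)
```

In a real deployment this loop would run continuously in the background rather than for a fixed number of steps, and the memory would be a retrieval index over everything the model has read, which is where the "daydreaming tax" of constant sampling and filtering comes from.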