LLMs: Lossy Encyclopedias

2025-09-02

Large language models (LLMs) are like lossy encyclopedias: they have encoded an enormous amount of information, but that information is compressed, so details get lost along the way. The skill is in telling apart the questions an LLM can answer well from the questions where lossiness wrecks the answer. Asking an LLM to "create a Zephyr project skeleton with these specific configuration options" demands lossless recall of exact file names and settings, which is precisely where a compressed model falls down. The fix is to supply a known-good example in the prompt, so the model works from facts sitting in its context window instead of details that may have been compressed out of its weights.
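
To make that concrete, here is a minimal sketch of the kind of known-good skeleton you might paste into the prompt as context. It follows the conventional Zephyr application layout (CMakeLists.txt, prj.conf, src/main.c); the specific contents reflect recent Zephyr releases and are assumptions to verify against the version you target, not an authoritative template.

```
CMakeLists.txt:
    cmake_minimum_required(VERSION 3.20.0)
    find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE})
    project(my_app)
    target_sources(app PRIVATE src/main.c)

prj.conf:
    # Kconfig options; enable only what the app needs
    CONFIG_PRINTK=y

src/main.c:
    #include <zephyr/kernel.h>

    int main(void)
    {
        /* printk output goes to the configured console backend */
        printk("Hello from %s\n", CONFIG_BOARD);
        return 0;
    }
```

The point is not this particular skeleton but the pattern: with a working example in context, the model can edit and extend facts it can see, rather than reconstructing them from lossy memory.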