Digital Fossils in AI: How Nonsense Terms Become Embedded in Our Knowledge

2025-05-01

Researchers discovered the nonsensical term "vegetative electron microscopy" spreading through AI models. Originating in digitization errors from 1950s papers and amplified by later translation mistakes, the phrase became embedded in large language models. The incident highlights the challenges posed by massive training datasets, limited transparency, and self-perpetuating errors in AI. It also raises serious concerns for academic research and publishing, prompting reflection on how to maintain reliable knowledge systems.