Anthropic's Claude Gets a Citations API to Fight Hallucinations
2025-01-28

Anthropic has launched Citations, a new API capability that builds retrieval-augmented generation (RAG)-style grounding directly into its Claude models. The feature combats AI hallucinations by linking responses directly to source documents: developers can add documents to Claude's context, and the model cites the specific passages it used when generating an answer. Anthropic's internal testing showed up to a 15 percent improvement in recall accuracy. Early adopters such as Thomson Reuters and Endex report positive results, including fewer source confabulations and more references per response. While further evaluation is needed, this represents a meaningful step toward more reliable AI.
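As a rough illustration of how the feature is used, the sketch below passes a document block with citations enabled through the Anthropic Python SDK's messages API and then reads back any cited passages. The model name, document text, and response-handling details are illustrative assumptions, not a definitive integration guide.

```python
# Minimal sketch: ask Claude a question grounded in a supplied document,
# with citations enabled so the reply can point back to specific passages.
# Field names follow Anthropic's messages API; specifics here are illustrative.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    # The source document Claude may cite from (hypothetical text).
                    "type": "document",
                    "source": {
                        "type": "text",
                        "media_type": "text/plain",
                        "data": "Acme Corp reported Q3 revenue of $4.2M, up 12% year over year.",
                    },
                    "title": "Acme Q3 report",
                    "citations": {"enabled": True},  # ask Claude to cite passages it uses
                },
                {
                    "type": "text",
                    "text": "What was Acme's Q3 revenue, and how did it change?",
                },
            ],
        }
    ],
)

# Text blocks in the reply may carry citations pointing back to the document.
for block in response.content:
    if block.type == "text":
        print(block.text)
        for citation in getattr(block, "citations", None) or []:
            print("  cited:", citation.cited_text)
```

In this pattern the grounding documents travel with the request itself, so the citation metadata in the response can be surfaced to end users as verifiable references rather than unsupported claims.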
AI