Reference hallucinations (citations that look legitimate but do not correspond to any real publication) have drawn growing attention in 2025. Researchers, editors, and publishers increasingly recognize that AI systems can produce fabricated citations that are highly convincing and difficult to detect by hand.
Below is a structured summary of the latest peer-reviewed findings.
A 2025 evaluation by Cabezas-Clavijo & Sidorenko-Bautista examined 400 references generated by eight AI chatbots. Only 26.5% were fully correct, while 39.8% were fabricated or otherwise incorrect, indicating that the problem is widespread and not confined to any single platform.
Multiple studies attribute these hallucinations to how language models work: they generate statistically plausible text patterns rather than retrieving verified bibliographic records. Because academic citations follow highly predictable structures (authors, year, title, journal, volume, pages), a model can easily produce references that look legitimate yet point to nothing real.
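To illustrate the point, here is a minimal sketch using invented example strings (neither is a citation from any of the studies discussed here): a surface-level format check accepts a fabricated reference just as readily as a genuine-looking one.

```python
import re

# Hypothetical pattern for "Author(s) (Year). Title. Journal, Volume(Issue), Pages."
CITATION_PATTERN = re.compile(
    r"^(?P<authors>[^()]+)\s\((?P<year>\d{4})\)\.\s"
    r"(?P<title>[^.]+)\.\s"
    r"(?P<journal>[^,]+),\s\d+(\(\d+\))?,\s\d+-\d+\.$"
)

# Both strings are invented for illustration; neither refers to a real paper.
references = [
    "Smith, J. (2023). Trends in editorial screening. Journal of Example Studies, 5(2), 101-115.",
    "Doe, A. (2024). Quantum citation graphs. Annals of Imaginary Research, 12(3), 45-67.",
]

for ref in references:
    # The structural check passes for both, even though nothing guarantees either exists.
    print(bool(CITATION_PATTERN.match(ref)), "-", ref)
```

Both strings satisfy the expected structure; only a lookup against a bibliographic registry can tell a real record from a fabricated one.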
Janse van Rensburg (2025) demonstrated that automated auditing can achieve a 91.7% verification rate across thousands of references, reducing months of manual work to two hours.
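The study's own tooling is not reproduced here, but a minimal sketch of this kind of automated audit, assuming the public Crossref REST API and the requests library, looks roughly like this:

```python
import requests

CROSSREF_WORKS = "https://api.crossref.org/works"

def doi_exists(doi: str, timeout: float = 10.0) -> bool:
    """Return True if the Crossref registry has a record for this DOI."""
    resp = requests.get(f"{CROSSREF_WORKS}/{doi}", timeout=timeout)
    return resp.status_code == 200

def find_by_title(title: str, rows: int = 3, timeout: float = 10.0) -> list:
    """Return candidate Crossref records whose bibliographic metadata matches the title."""
    resp = requests.get(
        CROSSREF_WORKS,
        params={"query.bibliographic": title, "rows": rows},
        timeout=timeout,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

if __name__ == "__main__":
    # Substitute the DOI and title of the reference under test.
    print(doi_exists("10.1038/nature14539"))
    for item in find_by_title("Deep learning"):
        print(item.get("DOI"), (item.get("title") or [""])[0])
```

Checking the DOI first and falling back to a bibliographic title search is one simple way to batch-verify a reference list, which is what makes hour-scale audits of thousands of references plausible.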
Glynn (2025) proposed requiring authors to deposit full texts of cited works to eliminate the possibility of hallucinated citations entering peer review.
Researchers must now verify every AI-generated reference. Publishers are exploring new policies for screening citations in submissions, including automated verification and cross-registry checks.
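A cross-registry check of this kind could, for example, combine the DOI resolver at doi.org with PubMed's E-utilities. The screen_reference helper below is a hypothetical illustration under those assumptions, not any publisher's actual screening pipeline:

```python
import requests

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Ask the DOI registry directly: a registered DOI redirects to a landing page."""
    resp = requests.head(f"https://doi.org/{doi}", allow_redirects=False, timeout=timeout)
    return resp.status_code in (301, 302, 303)

def pubmed_has_title(title: str, timeout: float = 10.0) -> bool:
    """Search PubMed via NCBI E-utilities for any record whose title matches."""
    resp = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "pubmed", "term": f"{title}[Title]", "retmode": "json"},
        timeout=timeout,
    )
    resp.raise_for_status()
    return len(resp.json()["esearchresult"]["idlist"]) > 0

def screen_reference(title: str, doi: str = "") -> dict:
    """Flag a reference for manual review when no registry recognizes it."""
    results = {
        "doi": doi_resolves(doi) if doi else None,
        "pubmed": pubmed_has_title(title),
    }
    results["flagged"] = not any(v for v in results.values() if v is not None)
    return results
```

Agreement across independent registries raises confidence that a citation is real; a reference that appears in none of them is the natural candidate for manual review.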
These findings align closely with the design philosophy behind SourceVerify: automated, fast, multi-registry verification that prevents fabricated citations from slipping into research, peer review, or publication.
The 2025 literature confirms that citation hallucinations remain common, structurally predictable, and costly to detect manually. Automated verification is now widely recognized as the most reliable solution.