The short answer: No, you cannot trust citations generated by ChatGPT or other large language models without verification. Multiple studies have found that LLMs hallucinate between 30% and 70% of academic references, depending on the task and domain.
These aren't just minor errors. The citations often look completely legitimate—real author names, plausible journal titles, correct formatting—but point to articles that don't exist.
Several peer-reviewed studies have measured AI citation accuracy.
Large language models don't retrieve information from databases. They generate text by predicting what "looks right" based on patterns in their training data. When the model detects that a citation is expected, it constructs something that resembles a real reference.
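To make that concrete, here is a minimal sketch of what a citation request looks like from the caller's side, assuming the OpenAI Python SDK with a placeholder model name and prompt. Nothing in the request or the response involves a bibliographic database, so any references that come back are generated text, not lookups.

```python
# Minimal sketch: asking a chat model for references.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# in the environment; the model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "user",
         "content": "List three peer-reviewed papers on citation accuracy."}
    ],
)

# The reply is free-form generated text. No citation index was consulted,
# so every "reference" in it is a prediction that merely looks like a
# reference and must be verified independently.
print(response.choices[0].message.content)
```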
The result is often a plausible-looking but completely fabricated citation. The model might combine real author names, a plausible article title, a genuine journal, and invented volume, page, and DOI details.
This creates citations that pass visual inspection but fail database verification.
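A first database check is straightforward to script. As a rough sketch, assuming the citation includes a DOI and using CrossRef's public REST API via the `requests` package, you can test whether the DOI resolves to a registered work; a 404 strongly suggests fabrication, though legitimate sources without DOIs will not show up here either.

```python
# Rough sketch: check whether a DOI is registered with CrossRef.
# Assumes the `requests` package; the contact address in the User-Agent
# header is a placeholder you should replace with your own.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if CrossRef has a record for the DOI, False on a 404."""
    response = requests.get(
        f"https://api.crossref.org/works/{doi}",
        headers={"User-Agent": "citation-check/0.1 (mailto:you@example.com)"},
        timeout=10,
    )
    if response.status_code == 404:
        return False              # no registered work with this DOI
    response.raise_for_status()   # surface rate limits or server errors
    return True

if __name__ == "__main__":
    print(doi_exists("10.1000/xyz123"))  # placeholder DOI for illustration
```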
Using unverified AI-generated citations can result in serious consequences, from rejected or retracted work to academic-integrity investigations and lasting damage to professional credibility.
Every citation from ChatGPT, Claude, or any other LLM should be verified before use. Manual verification involves looking up each reference in a bibliographic database such as Google Scholar, CrossRef, or PubMed, confirming that the work actually exists, and checking that the authors, title, journal, year, and page numbers all match.
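For references without a DOI, the same lookup can be scripted by title and author. The sketch below again assumes the `requests` package and CrossRef's free REST API; the example inputs are hypothetical and the similarity score is an illustration, not a standard threshold. It searches for a reference and reports how closely the best hit matches the claimed title and year.

```python
# Sketch: look up a citation on CrossRef and compare key fields.
# Assumes `requests`; field names follow CrossRef's documented JSON schema.
from difflib import SequenceMatcher
import requests

def check_citation(title: str, author: str, year: int) -> dict | None:
    """Return a small field-by-field comparison against the best CrossRef hit."""
    response = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": f"{author} {title}", "rows": 1},
        timeout=10,
    )
    response.raise_for_status()
    items = response.json()["message"]["items"]
    if not items:
        return None  # nothing remotely similar is indexed

    hit = items[0]
    found_title = (hit.get("title") or [""])[0]
    found_year = (hit.get("issued", {}).get("date-parts") or [[None]])[0][0]

    return {
        "found_title": found_title,
        "title_similarity": SequenceMatcher(
            None, title.lower(), found_title.lower()
        ).ratio(),
        "year_matches": found_year == year,
        "doi": hit.get("DOI"),
    }

if __name__ == "__main__":
    # Hypothetical inputs for illustration only.
    print(check_citation("Example article title", "Doe", 2020))
```

Neither check is conclusive on its own, since no single database indexes everything; that is why comparing individual fields across multiple sources matters.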
This process is time-consuming for long bibliographies. SourceVerify automates this verification using the SVRIS standard, which provides transparent, auditable results showing exactly which fields matched and which sources were checked.
ChatGPT and other LLMs hallucinate academic citations at alarming rates. These fabricated references look real but point to non-existent publications. Never use AI-generated citations without verification—either manually or with automated tools like SourceVerify that implement the transparent SVRIS verification standard.