A survey of hallucination in large foundation models

V Rawte, A Sheth, A Das - arXiv preprint arXiv:2309.05922, 2023 - arxiv.org
Hallucination in a foundation model (FM) refers to the generation of content that strays from
factual reality or includes fabricated information. This survey paper provides an extensive …

Survey on factuality in large language models: Knowledge, retrieval and domain-specificity

C Wang, X Liu, Y Yue, X Tang, T Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
This survey addresses the crucial issue of factuality in Large Language Models (LLMs). As
LLMs find applications across diverse domains, the reliability and accuracy of their outputs …

Siren's song in the AI ocean: a survey on hallucination in large language models

Y Zhang, Y Li, L Cui, D Cai, L Liu, T Fu… - arXiv preprint arXiv …, 2023 - arxiv.org
While large language models (LLMs) have demonstrated remarkable capabilities across a
range of downstream tasks, a significant concern revolves around their propensity to exhibit …

Survey of hallucination in natural language generation

Z Ji, N Lee, R Frieske, T Yu, D Su, Y Xu, E Ishii… - ACM computing …, 2023 - dl.acm.org
Natural Language Generation (NLG) has improved exponentially in recent years thanks to
the development of sequence-to-sequence deep learning technologies such as Transformer …

Cognitive mirage: A review of hallucinations in large language models

H Ye, T Liu, A Zhang, W Hua, W Jia - arXiv preprint arXiv:2309.06794, 2023 - arxiv.org
As large language models continue to develop in the field of AI, text generation systems are
susceptible to a worrisome phenomenon known as hallucination. In this study, we …

Mitigating large language model hallucinations via autonomous knowledge graph-based retrofitting

X Guan, Y Liu, H Lin, Y Lu, B He, X Han… - Proceedings of the AAAI …, 2024 - ojs.aaai.org
Incorporating factual knowledge from knowledge graphs is regarded as a promising approach
for mitigating hallucination in large language models (LLMs). Existing methods usually …

The dawn after the dark: An empirical study on factuality hallucination in large language models

J Li, J Chen, R Ren, X Cheng, WX Zhao, JY Nie… - arXiv preprint arXiv …, 2024 - arxiv.org
In the era of large language models (LLMs), hallucination (i.e., the tendency to generate
factually incorrect content) poses a great challenge to the trustworthy and reliable deployment of …

Contextcite: Attributing model generation to context

B Cohen-Wang, H Shah… - Advances in Neural …, 2025 - proceedings.neurips.cc
How do language models use information provided as context when generating a
response? Can we infer whether a particular generated statement is actually grounded in …

Risk taxonomy, mitigation, and assessment benchmarks of large language model systems

T Cui, Y Wang, C Fu, Y Xiao, S Li, X Deng, Y Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) have strong capabilities in solving diverse natural language
processing tasks. However, the safety and security issues of LLM systems have become the …

Towards trustworthy LLMs: a review on debiasing and dehallucinating in large language models

Z Lin, S Guan, W Zhang, H Zhang, Y Li… - Artificial Intelligence …, 2024 - Springer
Recently, large language models (LLMs) have attracted considerable attention due to their
remarkable capabilities. However, LLMs' generation of biased or hallucinatory content …