A comprehensive survey of hallucination mitigation techniques in large language models

SM Tonmoy, SM Zaman, V Jain, A Rani… - arXiv preprint arXiv …, 2024 - amanchadha.com
Abstract: As Large Language Models (LLMs) continue to advance in their ability to write
human-like text, a key challenge remains around their tendency to "hallucinate" – generating …

A survey of hallucination in large foundation models

V Rawte, A Sheth, A Das - arXiv preprint arXiv:2309.05922, 2023 - arxiv.org
Hallucination in a foundation model (FM) refers to the generation of content that strays from
factual reality or includes fabricated information. This survey paper provides an extensive …

A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions

L Huang, W Yu, W Ma, W Zhong, Z Feng… - ACM Transactions on …, 2025 - dl.acm.org
The emergence of large language models (LLMs) has marked a significant breakthrough in
natural language processing (NLP), fueling a paradigm shift in information acquisition …

Siren's song in the AI ocean: a survey on hallucination in large language models

Y Zhang, Y Li, L Cui, D Cai, L Liu, T Fu… - arXiv preprint arXiv …, 2023 - arxiv.org
While large language models (LLMs) have demonstrated remarkable capabilities across a
range of downstream tasks, a significant concern revolves around their propensity to exhibit …

Large language models cannot self-correct reasoning yet

J Huang, X Chen, S Mishra, HS Zheng, AW Yu… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) have emerged as a groundbreaking technology with their
unparalleled text generation capabilities across various applications. Nevertheless …

Chain-of-verification reduces hallucination in large language models

S Dhuliawala, M Komeili, J Xu, R Raileanu, X Li… - arXiv preprint arXiv …, 2023 - arxiv.org
Generation of plausible yet incorrect factual information, termed hallucination, is an
unsolved issue in large language models. We study the ability of language models to …

Enabling large language models to generate text with citations

T Gao, H Yen, J Yu, D Chen - arXiv preprint arXiv:2305.14627, 2023 - arxiv.org
Large language models (LLMs) have emerged as a widely-used tool for information
seeking, but their generated outputs are prone to hallucination. In this work, our aim is to …

DSPy: Compiling declarative language model calls into self-improving pipelines

O Khattab, A Singhvi, P Maheshwari, Z Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
The ML community is rapidly exploring techniques for prompting language models (LMs)
and for stacking them into pipelines that solve complex tasks. Unfortunately, existing LM …

Survey on factuality in large language models: Knowledge, retrieval and domain-specificity

C Wang, X Liu, Y Yue, X Tang, T Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
This survey addresses the crucial issue of factuality in Large Language Models (LLMs). As
LLMs find applications across diverse domains, the reliability and accuracy of their outputs …

A survey on LLM-generated text detection: Necessity, methods, and future directions

J Wu, S Yang, R Zhan, Y Yuan, LS Chao… - Computational …, 2025 - direct.mit.edu
The remarkable ability of large language models (LLMs) to comprehend, interpret, and
generate complex language has rapidly integrated LLM-generated text into various aspects …