Hallucination is inevitable: An innate limitation of large language models

Z Xu, S Jain, M Kankanhalli - arXiv preprint arXiv:2401.11817, 2024 - arxiv.org
Hallucination has been widely recognized to be a significant drawback for large language
models (LLMs). There have been many works that attempt to reduce the extent of …

Confabulation: The surprising value of large language model hallucinations

P Sui, E Duede, S Wu, RJ So - arXiv preprint arXiv:2406.04175, 2024 - arxiv.org
This paper presents a systematic defense of large language model (LLM) hallucinations
or 'confabulations' as a potential resource instead of a categorically negative pitfall. The …

Literature Review of AI Hallucination Research Since the Advent of ChatGPT: Focusing on Papers from arXiv

DM Park, HJ Lee - Informatization Policy, 2024 - koreascience.kr
Hallucination is a significant barrier to the utilization of large-scale language models or
multimodal models. In this study, we collected 654 computer science papers with …

Small agent can also rock! Empowering small language models as hallucination detector

X Cheng, J Li, WX Zhao, H Zhang, F Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
Hallucination detection is a challenging task for large language models (LLMs), and existing
studies heavily rely on powerful closed-source LLMs such as GPT-4. In this paper, we …

Patient-friendly discharge summaries in Korea based on ChatGPT: software development and validation

H Kim, HM Jin, YB Jung… - Journal of Korean Medical …, 2024 - synapse.koreamed.org
Background Although discharge summaries in patient-friendly language can enhance
patient comprehension and satisfaction, they can also increase medical staff workload …

OnionEval: An Unified Evaluation of Fact-conflicting Hallucination for Small-Large Language Models

C Sun, Y Li, D Wu, B Boulet - arXiv preprint arXiv:2501.12975, 2025 - arxiv.org
Large Language Models (LLMs) are highly capable but require significant computational
resources for both training and inference. Within the LLM family, smaller models (those with …

A Unified Hallucination Mitigation Framework for Large Vision-Language Models

Y Chang, L Jing, X Zhang, Y Zhang - arXiv preprint arXiv:2409.16494, 2024 - arxiv.org
Hallucination is a common problem for Large Vision-Language Models (LVLMs) with long
generations, which is difficult to eradicate. The generation with hallucinations is partially …

A Roadmap for Software Testing in Open-Collaborative and AI-Powered Era

Q Wang, J Wang, M Li, Y Wang, Z Liu - ACM Transactions on Software …, 2024 - dl.acm.org
Internet technology has given rise to an open-collaborative software development paradigm,
necessitating the open-collaborative schema to software testing. It enables diverse and …

Dynamic Attention-Guided Context Decoding for Mitigating Context Faithfulness Hallucinations in Large Language Models

Y Huang, Y Zhang, N Cheng, Z Li, S Wang… - arXiv preprint arXiv …, 2025 - arxiv.org
Large language models (LLMs) often suffer from context faithfulness hallucinations, where
outputs deviate from retrieved information due to insufficient context utilization and high …