Survey on factuality in large language models: Knowledge, retrieval and domain-specificity

C Wang, X Liu, Y Yue, X Tang, T Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
This survey addresses the crucial issue of factuality in Large Language Models (LLMs). As
LLMs find applications across diverse domains, the reliability and accuracy of their outputs …

A survey of hallucination in large foundation models

V Rawte, A Sheth, A Das - arXiv preprint arXiv:2309.05922, 2023 - arxiv.org
Hallucination in a foundation model (FM) refers to the generation of content that strays from
factual reality or includes fabricated information. This survey paper provides an extensive …

Siren's song in the AI ocean: a survey on hallucination in large language models

Y Zhang, Y Li, L Cui, D Cai, L Liu, T Fu… - arXiv preprint arXiv …, 2023 - arxiv.org
While large language models (LLMs) have demonstrated remarkable capabilities across a
range of downstream tasks, a significant concern revolves around their propensity to exhibit …

Cognitive mirage: A review of hallucinations in large language models

H Ye, T Liu, A Zhang, W Hua, W Jia - arXiv preprint arXiv:2309.06794, 2023 - arxiv.org
As large language models continue to develop in the field of AI, text generation systems are
susceptible to a worrisome phenomenon known as hallucination. In this study, we …

Towards trustworthy LLMs: a review on debiasing and dehallucinating in large language models

Z Lin, S Guan, W Zhang, H Zhang, Y Li… - Artificial Intelligence …, 2024 - Springer
Recently, large language models (LLMs) have attracted considerable attention due to their
remarkable capabilities. However, LLMs' generation of biased or hallucinatory content …

Mitigating large language model hallucinations via autonomous knowledge graph-based retrofitting

X Guan, Y Liu, H Lin, Y Lu, B He, X Han… - Proceedings of the AAAI …, 2024 - ojs.aaai.org
Incorporating factual knowledge from knowledge graphs is regarded as a promising approach
for mitigating the hallucination of large language models (LLMs). Existing methods usually …

The dawn after the dark: An empirical study on factuality hallucination in large language models

J Li, J Chen, R Ren, X Cheng, WX Zhao, JY Nie… - arXiv preprint arXiv …, 2024 - arxiv.org
In the era of large language models (LLMs), hallucination (i.e., the tendency to generate
factually incorrect content) poses a great challenge to the trustworthy and reliable deployment of …

Internal consistency and self-feedback in large language models: A survey

X Liang, S Song, Z Zheng, H Wang, Q Yu, X Li… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) often exhibit deficient reasoning or generate hallucinations.
To address these, studies prefixed with "Self-", such as Self-Consistency, Self-Improve, and …

Risk taxonomy, mitigation, and assessment benchmarks of large language model systems

T Cui, Y Wang, C Fu, Y Xiao, S Li, X Deng, Y Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) have strong capabilities in solving diverse natural language
processing tasks. However, the safety and security issues of LLM systems have become the …

Trustworthiness in retrieval-augmented generation systems: A survey

Y Zhou, Y Liu, X Li, J Jin, H Qian, Z Liu, C Li… - arXiv preprint arXiv …, 2024 - zhouyujia.cn
Retrieval-Augmented Generation (RAG) has quickly grown into a pivotal paradigm in the
development of Large Language Models (LLMs). While much of the current research in this …