Survey on factuality in large language models: Knowledge, retrieval and domain-specificity
This survey addresses the crucial issue of factuality in Large Language Models (LLMs). As
LLMs find applications across diverse domains, the reliability and accuracy of their outputs …
A survey of hallucination in large foundation models
Hallucination in a foundation model (FM) refers to the generation of content that strays from
factual reality or includes fabricated information. This survey paper provides an extensive …
Siren's song in the AI ocean: a survey on hallucination in large language models
While large language models (LLMs) have demonstrated remarkable capabilities across a
range of downstream tasks, a significant concern revolves around their propensity to exhibit …
Cognitive mirage: A review of hallucinations in large language models
As large language models continue to develop in the field of AI, text generation systems are
susceptible to a worrisome phenomenon known as hallucination. In this study, we …
Towards trustworthy LLMs: a review on debiasing and dehallucinating in large language models
Z Lin, S Guan, W Zhang, H Zhang, Y Li… - Artificial Intelligence …, 2024 - Springer
Recently, large language models (LLMs) have attracted considerable attention due to their
remarkable capabilities. However, LLMs' generation of biased or hallucinatory content …
Mitigating large language model hallucinations via autonomous knowledge graph-based retrofitting
Incorporating factual knowledge from knowledge graphs is regarded as a promising approach
for mitigating the hallucination of large language models (LLMs). Existing methods usually …
The dawn after the dark: An empirical study on factuality hallucination in large language models
In the era of large language models (LLMs), hallucination (i.e., the tendency to generate
factually incorrect content) poses a great challenge to the trustworthy and reliable deployment of …
Internal consistency and self-feedback in large language models: A survey
Large language models (LLMs) often exhibit deficient reasoning or generate hallucinations.
To address these, studies prefixed with "Self-", such as Self-Consistency, Self-Improve, and …
Risk taxonomy, mitigation, and assessment benchmarks of large language model systems
Large language models (LLMs) have strong capabilities in solving diverse natural language
processing tasks. However, the safety and security issues of LLM systems have become the …
Trustworthiness in retrieval-augmented generation systems: A survey
Retrieval-Augmented Generation (RAG) has quickly grown into a pivotal paradigm in the
development of Large Language Models (LLMs). While much of the current research in this …