Survey on factuality in large language models: Knowledge, retrieval and domain-specificity

C Wang, X Liu, Y Yue, X Tang, T Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
This survey addresses the crucial issue of factuality in Large Language Models (LLMs). As
LLMs find applications across diverse domains, the reliability and accuracy of their outputs …

Combating misinformation in the age of LLMs: Opportunities and challenges

C Chen, K Shu - AI Magazine, 2024 - Wiley Online Library
Misinformation such as fake news and rumors is a serious threat to information ecosystems
and public trust. The emergence of large language models (LLMs) has great potential to …

Siren's song in the AI ocean: a survey on hallucination in large language models

Y Zhang, Y Li, L Cui, D Cai, L Liu, T Fu… - arXiv preprint arXiv …, 2023 - arxiv.org
While large language models (LLMs) have demonstrated remarkable capabilities across a
range of downstream tasks, a significant concern revolves around their propensity to exhibit …

A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions

L Huang, W Yu, W Ma, W Zhong, Z Feng… - ACM Transactions on …, 2024 - dl.acm.org
The emergence of large language models (LLMs) has marked a significant breakthrough in
natural language processing (NLP), fueling a paradigm shift in information acquisition …

Chain-of-verification reduces hallucination in large language models

S Dhuliawala, M Komeili, J Xu, R Raileanu, X Li… - arXiv preprint arXiv …, 2023 - arxiv.org
Generation of plausible yet incorrect factual information, termed hallucination, is an
unsolved issue in large language models. We study the ability of language models to …

Physics of language models: Part 3.1, knowledge storage and extraction

Z Allen-Zhu, Y Li - arXiv preprint arXiv:2309.14316, 2023 - arxiv.org
Large language models (LLMs) can store a vast amount of world knowledge, often
extractable via question-answering (e.g., "What is Abraham Lincoln's birthday?"). However …

A comprehensive survey of hallucination mitigation techniques in large language models

SM Tonmoy, SM Zaman, V Jain, A Rani… - arXiv preprint arXiv …, 2024 - arxiv.org
As Large Language Models (LLMs) continue to advance in their ability to write human-like
text, a key challenge remains: their tendency to hallucinate, generating content that …

Attention satisfies: A constraint-satisfaction lens on factual errors of language models

M Yuksekgonul, V Chandrasekaran, E Jones… - arXiv preprint arXiv …, 2023 - arxiv.org
We investigate the internal behavior of Transformer-based Large Language Models (LLMs)
when they generate factually incorrect text. We propose modeling factual queries as …

Physics of language models: Part 3.2, knowledge manipulation

Z Allen-Zhu, Y Li - arXiv preprint arXiv:2309.14402, 2023 - arxiv.org
Language models can store vast amounts of factual knowledge, but their ability to use this
knowledge for logical reasoning remains questionable. This paper explores a language …

Fine-tuning vs. retrieval-augmented generation for less popular knowledge

H Soudani, E Kanoulas, F Hasibi - … of the 2024 Annual International ACM …, 2024 - dl.acm.org
Language Models (LMs) memorize a vast amount of factual knowledge, exhibiting strong
performance across diverse tasks and domains. However, it has been observed that the …