Cognitive mirage: A review of hallucinations in large language models

H Ye, T Liu, A Zhang, W Hua, W Jia - arXiv preprint arXiv:2309.06794, 2023 - arxiv.org
As large language models continue to develop in the field of AI, text generation systems are
susceptible to a worrisome phenomenon known as hallucination. In this study, we …

Factuality challenges in the era of large language models and opportunities for fact-checking

I Augenstein, T Baldwin, M Cha… - Nature Machine …, 2024 - nature.com
The emergence of tools based on large language models (LLMs), such as OpenAI's
ChatGPT and Google's Gemini, has garnered immense public attention owing to their …

Trustllm: Trustworthiness in large language models

Y Huang, L Sun, H Wang, S Wu, Q Zhang, Y Li… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs), exemplified by ChatGPT, have gained considerable
attention for their excellent natural language processing capabilities. Nonetheless, these …

Woodpecker: Hallucination correction for multimodal large language models

S Yin, C Fu, S Zhao, T Xu, H Wang, D Sui… - Science China …, 2024 - Springer
Hallucination is a big shadow hanging over the rapidly evolving multimodal large language
models (MLLMs), referring to the phenomenon that the generated text is inconsistent with the image content …

Position: TrustLLM: Trustworthiness in large language models

Y Huang, L Sun, H Wang, S Wu… - International …, 2024 - proceedings.mlr.press
Large language models (LLMs) have gained considerable attention for their excellent
natural language processing capabilities. Nonetheless, these LLMs present many …

Factuality challenges in the era of large language models

I Augenstein, T Baldwin, M Cha, T Chakraborty… - arXiv preprint arXiv …, 2023 - arxiv.org
The emergence of tools based on Large Language Models (LLMs), such as OpenAI's
ChatGPT, Microsoft's Bing Chat, and Google's Bard, has garnered immense public attention …

Towards trustworthy LLMs: a review on debiasing and dehallucinating in large language models

Z Lin, S Guan, W Zhang, H Zhang, Y Li… - Artificial Intelligence …, 2024 - Springer
Recently, large language models (LLMs) have attracted considerable attention due to their
remarkable capabilities. However, LLMs' generation of biased or hallucinatory content …

Self-checker: Plug-and-play modules for fact-checking with large language models

M Li, B Peng, M Galley, J Gao, Z Zhang - arXiv preprint arXiv:2305.14623, 2023 - arxiv.org
Fact-checking is an essential task in NLP that is commonly utilized for validating the factual
accuracy of claims. Prior work has mainly focused on fine-tuning pre-trained language models …

Risk taxonomy, mitigation, and assessment benchmarks of large language model systems

T Cui, Y Wang, C Fu, Y Xiao, S Li, X Deng, Y Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) have strong capabilities in solving diverse natural language
processing tasks. However, the safety and security issues of LLM systems have become the …

Zero-resource hallucination prevention for large language models

J Luo, C Xiao, F Ma - arXiv preprint arXiv:2309.02654, 2023 - arxiv.org
The prevalent use of large language models (LLMs) in various domains has drawn attention
to the issue of "hallucination," which refers to instances where LLMs generate factually …