Combating misinformation in the age of LLMs: Opportunities and challenges

C Chen, K Shu - AI Magazine, 2024 - Wiley Online Library
Misinformation such as fake news and rumors is a serious threat to information ecosystems
and public trust. The emergence of large language models (LLMs) has great potential to …

Cognitive mirage: A review of hallucinations in large language models

H Ye, T Liu, A Zhang, W Hua, W Jia - arXiv preprint arXiv …, 2023 - arxiv.org

RAGTruth: A hallucination corpus for developing trustworthy retrieval-augmented language models
C Niu, Y Wu, J Zhu, S Xu, K Shum, R Zhong… - arXiv preprint arXiv …, 2023 - arxiv.org
Retrieval-augmented generation (RAG) has become a main technique for alleviating
hallucinations in large language models (LLMs). Despite the integration of RAG, LLMs may …

Calibrated language models must hallucinate

AT Kalai, SS Vempala - Proceedings of the 56th Annual ACM …, 2024 - dl.acm.org
Recent language models generate false but plausible-sounding text with surprising
frequency. Such “hallucinations” are an obstacle to the usability of language-based AI …

Towards trustworthy LLMs: a review on debiasing and dehallucinating in large language models

Z Lin, S Guan, W Zhang, H Zhang, Y Li… - Artificial Intelligence …, 2024 - Springer
Recently, large language models (LLMs) have attracted considerable attention due to their
remarkable capabilities. However, LLMs' generation of biased or hallucinatory content …