Factuality challenges in the era of large language models and opportunities for fact-checking

I Augenstein, T Baldwin, M Cha… - Nature Machine …, 2024 - nature.com
The emergence of tools based on large language models (LLMs), such as OpenAI's
ChatGPT and Google's Gemini, has garnered immense public attention owing to their …

Don't Hallucinate, Abstain: Identifying LLM Knowledge Gaps via Multi-LLM Collaboration

S Feng, W Shi, Y Wang, W Ding… - arXiv preprint arXiv …, 2024 - arxiv.org
Despite efforts to expand the knowledge of large language models (LLMs), knowledge gaps-
-missing or outdated information in LLMs--might always persist given the evolving nature of …

Speculative RAG: Enhancing retrieval augmented generation through drafting

Z Wang, Z Wang, L Le, HS Zheng, S Mishra… - arXiv preprint arXiv …, 2024 - arxiv.org
Retrieval augmented generation (RAG) combines the generative abilities of large language
models (LLMs) with external knowledge sources to provide more accurate and up-to-date …

Mitigating hallucination in fictional character role-play

N Sadeq, Z **e, B Kang, P Lamba, X Gao… - arxiv preprint arxiv …, 2024 - arxiv.org
Role-playing has wide-ranging applications in customer support, embodied agents,
computational social science, etc. The influence of parametric world knowledge of large …

Usable XAI: 10 strategies towards exploiting explainability in the LLM era

X Wu, H Zhao, Y Zhu, Y Shi, F Yang, T Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
Explainable AI (XAI) refers to techniques that provide human-understandable insights into
the workings of AI models. Recently, the focus of XAI is being extended towards Large …

Defining knowledge: Bridging epistemology and large language models

C Fierro, R Dhar, F Stamatiou, N Garneau… - arXiv preprint arXiv …, 2024 - arxiv.org
Knowledge claims are abundant in the literature on large language models (LLMs); but can
we say that GPT-4 truly "knows" the Earth is round? To address this question, we review …

DELL: Generating reactions and explanations for LLM-based misinformation detection

H Wan, S Feng, Z Tan, H Wang, Y Tsvetkov… - arXiv preprint arXiv …, 2024 - arxiv.org
Challenges with factuality and hallucination limit large language models from being
employed off-the-shelf to judge the veracity of news articles, where factual …