Factuality challenges in the era of large language models and opportunities for fact-checking
The emergence of tools based on large language models (LLMs), such as OpenAI's
ChatGPT and Google's Gemini, has garnered immense public attention owing to their …
Don't Hallucinate, Abstain: Identifying LLM Knowledge Gaps via Multi-LLM Collaboration
Despite efforts to expand the knowledge of large language models (LLMs), knowledge gaps-
-missing or outdated information in LLMs--might always persist given the evolving nature of …
Speculative RAG: Enhancing retrieval augmented generation through drafting
Retrieval augmented generation (RAG) combines the generative abilities of large language
models (LLMs) with external knowledge sources to provide more accurate and up-to-date …
Mitigating hallucination in fictional character role-play
Role-playing has wide-ranging applications in customer support, embodied agents,
computational social science, etc. The influence of parametric world knowledge of large …
Usable XAI: 10 strategies towards exploiting explainability in the LLM era
Explainable AI (XAI) refers to techniques that provide human-understandable insights into
the workings of AI models. Recently, the focus of XAI is being extended towards Large …
Defining knowledge: Bridging epistemology and large language models
Knowledge claims are abundant in the literature on large language models (LLMs); but can
we say that GPT-4 truly "knows" the Earth is round? To address this question, we review …
DELL: Generating reactions and explanations for LLM-based misinformation detection
Large language models are limited by factuality challenges and hallucinations, which prevent
their direct off-the-shelf use for judging the veracity of news articles, where factual …