Unleashing the potential of prompt engineering in large language models: a comprehensive review

B Chen, Z Zhang, N Langrené, S Zhu - arXiv preprint arXiv:2310.14735, 2023 - arxiv.org
This comprehensive review delves into the pivotal role of prompt engineering in unleashing
the capabilities of Large Language Models (LLMs). The development of Artificial Intelligence …

Survey on factuality in large language models: Knowledge, retrieval and domain-specificity

C Wang, X Liu, Y Yue, X Tang, T Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
This survey addresses the crucial issue of factuality in Large Language Models (LLMs). As
LLMs find applications across diverse domains, the reliability and accuracy of their outputs …

Physics of language models: Part 3.1, knowledge storage and extraction

Z Allen-Zhu, Y Li - arXiv preprint arXiv:2309.14316, 2023 - arxiv.org
Large language models (LLMs) can store a vast amount of world knowledge, often
extractable via question-answering (e.g., "What is Abraham Lincoln's birthday?"). However …

Does fine-tuning LLMs on new knowledge encourage hallucinations?

Z Gekhman, G Yona, R Aharoni, M Eyal… - arXiv preprint arXiv …, 2024 - arxiv.org
When large language models are aligned via supervised fine-tuning, they may encounter
new factual information that was not acquired through pre-training. It is often conjectured that …

Don't Hallucinate, Abstain: Identifying LLM Knowledge Gaps via Multi-LLM Collaboration

S Feng, W Shi, Y Wang, W Ding… - arXiv preprint arXiv …, 2024 - arxiv.org
Despite efforts to expand the knowledge of large language models (LLMs), knowledge gaps --
missing or outdated information in LLMs -- might always persist given the evolving nature of …

Knowledge conflicts for LLMs: A survey

R Xu, Z Qi, Z Guo, C Wang, H Wang, Y Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
This survey provides an in-depth analysis of knowledge conflicts for large language models
(LLMs), highlighting the complex challenges they encounter when blending contextual and …

Factuality challenges in the era of large language models and opportunities for fact-checking

I Augenstein, T Baldwin, M Cha… - Nature Machine …, 2024 - nature.com
The emergence of tools based on large language models (LLMs), such as OpenAI's
ChatGPT and Google's Gemini, has garnered immense public attention owing to their …

Usable XAI: 10 strategies towards exploiting explainability in the LLM era

X Wu, H Zhao, Y Zhu, Y Shi, F Yang, T Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
Explainable AI (XAI) refers to techniques that provide human-understandable insights into
the workings of AI models. Recently, the focus of XAI is being extended towards Large …

Cutting off the head ends the conflict: A mechanism for interpreting and mitigating knowledge conflicts in language models

Z Jin, P Cao, H Yuan, Y Chen, J Xu, H Li… - arXiv preprint arXiv …, 2024 - arxiv.org
Recently, retrieval augmentation and tool augmentation have demonstrated a remarkable
capability to expand the internal memory boundaries of language models (LMs) by providing …

DELL: Generating reactions and explanations for LLM-based misinformation detection

H Wan, S Feng, Z Tan, H Wang, Y Tsvetkov… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models are limited by challenges in factuality and hallucinations when employed
directly off-the-shelf to judge the veracity of news articles, where factual …