Retrieval-augmented generation for large language models: A survey

Y Gao, Y Xiong, X Gao, K Jia, J Pan, Y Bi… - arXiv preprint arXiv …, 2023 - simg.baai.ac.cn
Large language models (LLMs) demonstrate powerful capabilities, but they still face
challenges in practical applications, such as hallucinations, slow knowledge updates, and …

A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions

L Huang, W Yu, W Ma, W Zhong, Z Feng… - ACM Transactions on …, 2025 - dl.acm.org
The emergence of large language models (LLMs) has marked a significant breakthrough in
natural language processing (NLP), fueling a paradigm shift in information acquisition …

Large language models for information retrieval: A survey

Y Zhu, H Yuan, S Wang, J Liu, W Liu, C Deng… - arXiv preprint arXiv …, 2023 - arxiv.org
As a primary means of information acquisition, information retrieval (IR) systems, such as
search engines, have integrated themselves into our daily lives. These systems also serve …

Personal LLM agents: Insights and survey about the capability, efficiency and security

Y Li, H Wen, W Wang, X Li, Y Yuan, G Liu, J Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
Since the advent of personal computing devices, intelligent personal assistants (IPAs) have
been one of the key technologies that researchers and engineers have focused on, aiming …

Large legal fictions: Profiling legal hallucinations in large language models

M Dahl, V Magesh, M Suzgun… - Journal of Legal Analysis, 2024 - academic.oup.com
Do large language models (LLMs) know the law? LLMs are increasingly being used to
augment legal practice, education, and research, yet their revolutionary potential is …

CRUD-RAG: A comprehensive Chinese benchmark for retrieval-augmented generation of large language models

Y Lyu, Z Li, S Niu, F Xiong, B Tang, W Wang… - ACM Transactions on …, 2025 - dl.acm.org
Retrieval-augmented generation (RAG) is a technique that enhances the capabilities of
large language models (LLMs) by incorporating external knowledge sources. This method …

Don't Hallucinate, Abstain: Identifying LLM Knowledge Gaps via Multi-LLM Collaboration

S Feng, W Shi, Y Wang, W Ding… - arXiv preprint arXiv …, 2024 - arxiv.org
Despite efforts to expand the knowledge of large language models (LLMs), knowledge gaps-
-missing or outdated information in LLMs--might always persist given the evolving nature of …

Dense X retrieval: What retrieval granularity should we use?

T Chen, H Wang, S Chen, W Yu, K Ma… - Proceedings of the …, 2024 - aclanthology.org
Dense retrieval has become a prominent method to obtain relevant context or world
knowledge in open-domain NLP tasks. When we use a learned dense retriever on a …

RQ-RAG: Learning to refine queries for retrieval augmented generation

CM Chan, C Xu, R Yuan, H Luo, W Xue, Y Guo… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) exhibit remarkable capabilities but are prone to generating
inaccurate or hallucinatory responses. This limitation stems from their reliance on vast …

RankRAG: Unifying context ranking with retrieval-augmented generation in LLMs

Y Yu, W Ping, Z Liu, B Wang, J You… - Advances in …, 2025 - proceedings.neurips.cc
Large language models (LLMs) typically utilize the top-k contexts from a retriever in retrieval-
augmented generation (RAG). In this work, we propose a novel method called RankRAG …