Retrieval-augmented generation for large language models: A survey

Y Gao, Y Xiong, X Gao, K Jia, J Pan, Y Bi, Y Dai… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) demonstrate powerful capabilities, but they still face
challenges in practical applications, such as hallucinations, slow knowledge updates, and …

Large language models for information retrieval: A survey

Y Zhu, H Yuan, S Wang, J Liu, W Liu, C Deng… - arXiv preprint arXiv …, 2023 - arxiv.org
As a primary means of information acquisition, information retrieval (IR) systems, such as
search engines, have integrated themselves into our daily lives. These systems also serve …

Large legal fictions: Profiling legal hallucinations in large language models

M Dahl, V Magesh, M Suzgun… - Journal of Legal Analysis, 2024 - academic.oup.com
Do large language models (LLMs) know the law? LLMs are increasingly being used to
augment legal practice, education, and research, yet their revolutionary potential is …

CRUD-RAG: A comprehensive Chinese benchmark for retrieval-augmented generation of large language models

Y Lyu, Z Li, S Niu, F Xiong, B Tang, W Wang… - ACM Transactions on …, 2024 - dl.acm.org
Retrieval-Augmented Generation (RAG) is a technique that enhances the capabilities of
large language models (LLMs) by incorporating external knowledge sources. This method …

Dense X Retrieval: What retrieval granularity should we use?

T Chen, H Wang, S Chen, W Yu, K Ma, X Zhao… - arXiv preprint arXiv …, 2023 - arxiv.org
Dense retrieval has become a prominent method to obtain relevant context or world
knowledge in open-domain NLP tasks. When we use a learned dense retriever on a …

Personal LLM agents: Insights and survey about the capability, efficiency and security

Y Li, H Wen, W Wang, X Li, Y Yuan, G Liu, J Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
Since the advent of personal computing devices, intelligent personal assistants (IPAs) have
been one of the key technologies that researchers and engineers have focused on, aiming …

DALK: Dynamic Co-Augmentation of LLMs and KG to answer Alzheimer's Disease Questions with Scientific Literature

D Li, S Yang, Z Tan, JY Baik, S Yun, J Lee… - arXiv preprint arXiv …, 2024 - arxiv.org
Recent advancements in large language models (LLMs) have achieved promising
performances across various applications. Nonetheless, the ongoing challenge of …

Generate-then-ground in retrieval-augmented generation for multi-hop question answering

Z Shi, W Sun, S Gao, P Ren, Z Chen, Z Ren - arXiv preprint arXiv …, 2024 - arxiv.org
Multi-Hop Question Answering (MHQA) tasks present a significant challenge for large
language models (LLMs) due to the intensive knowledge required. Current solutions, like …

Astute RAG: Overcoming imperfect retrieval augmentation and knowledge conflicts for large language models

F Wang, X Wan, R Sun, J Chen, SÖ Arık - arXiv preprint arXiv:2410.07176, 2024 - arxiv.org
Retrieval-Augmented Generation (RAG), while effective in integrating external knowledge to
address the limitations of large language models (LLMs), can be undermined by imperfect …

RankRAG: Unifying context ranking with retrieval-augmented generation in LLMs

Y Yu, W Ping, Z Liu, B Wang, J You, C Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) typically utilize the top-k contexts from a retriever in retrieval-augmented generation (RAG). In this work, we propose a novel instruction fine-tuning …