Retrieval-augmented generation for large language models: A survey

Y Gao, Y Xiong, X Gao, K Jia, J Pan, Y Bi, Y Dai… - arxiv preprint arxiv …, 2023 - arxiv.org
Large language models (LLMs) demonstrate powerful capabilities, but they still face
challenges in practical applications, such as hallucinations, slow knowledge updates, and …

Surveying the MLLM landscape: A meta-review of current surveys

M Li, K Chen, Z Bi, M Liu, B Peng, Q Niu, J Liu… - arxiv preprint arxiv …, 2024 - arxiv.org
The rise of Multimodal Large Language Models (MLLMs) has become a transformative force
in the field of artificial intelligence, enabling machines to process and generate content …

Knowledge conflicts for LLMs: A survey

R Xu, Z Qi, Z Guo, C Wang, H Wang, Y Zhang… - arxiv preprint arxiv …, 2024 - arxiv.org
This survey provides an in-depth analysis of knowledge conflicts for large language models
(LLMs), highlighting the complex challenges they encounter when blending contextual and …
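
As a toy illustration of the context-memory conflicts such surveys catalogue, the sketch below (Python, with a hypothetical llm() generation function) simply compares a model's closed-book answer with its context-grounded answer and flags disagreement; real conflict detection is considerably more involved.

# Toy context-memory conflict check: compare the model's parametric (closed-book)
# answer with its answer when given retrieved context, and flag disagreement.
# `llm` is a hypothetical prompt -> text function.

def detect_conflict(question, context, llm):
    closed_book = llm(f"Question: {question}\nAnswer:").strip().lower()
    grounded = llm(f"Context: {context}\nQuestion: {question}\nAnswer:").strip().lower()
    return {"parametric": closed_book, "contextual": grounded, "conflict": closed_book != grounded}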

Boosting conversational question answering with fine-grained retrieval-augmentation and self-check

L Ye, Z Lei, J Yin, Q Chen, J Zhou, L He - Proceedings of the 47th …, 2024 - dl.acm.org
Retrieval-Augmented Generation (RAG) aims to generate more reliable and accurate
responses, by augmenting large language models (LLMs) with the external vast and …
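
The general retrieve-then-generate-then-verify pattern described here can be conveyed with a minimal sketch; the llm() callable and the keyword-overlap retriever below are hypothetical placeholders, and this is a generic RAG loop with a verification prompt, not the fine-grained method proposed in the cited paper.

# Minimal retrieve-then-generate loop with a self-check pass (illustrative only).
# `llm` stands in for any text-generation API; the toy retriever ranks documents
# by word overlap with the question.

def retrieve(question, corpus, k=3):
    """Score documents by word overlap with the question and return the top k."""
    q_words = set(question.lower().split())
    ranked = sorted(corpus, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def answer_with_self_check(question, corpus, llm):
    docs = retrieve(question, corpus)
    context = "\n".join(docs)
    draft = llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
    # Self-check: ask the model whether the draft is supported by the retrieved context.
    verdict = llm(f"Context:\n{context}\n\nClaim: {draft}\nIs the claim supported by the context? yes/no:")
    if verdict.strip().lower().startswith("no"):
        # Abstain (or trigger another retrieval round) when the draft is unsupported.
        return "I am not sure based on the retrieved evidence."
    return draft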

Retrieval-generation synergy augmented large language models

Z Feng, X Feng, D Zhao, M Yang… - ICASSP 2024-2024 IEEE …, 2024 - ieeexplore.ieee.org
Large language models augmented with task-relevant documents have demonstrated
impressive performance on knowledge-intensive tasks. However, regarding how to obtain …
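
One common way to make retrieval and generation reinforce each other is to iterate them, feeding each round's draft answer back into the next retrieval query; the sketch below assumes hypothetical retrieve() and llm() callables and is meant only to convey that loop, not the cited method.

# Rough sketch of iterative retrieval-generation synergy: each round's draft
# answer is folded back into the retrieval query, so retrieval and generation
# inform each other over a few rounds.

def iterative_rag(question, retrieve, llm, rounds=2):
    query, answer = question, ""
    for _ in range(rounds):
        docs = retrieve(query)
        context = "\n".join(docs)
        answer = llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
        # Use the draft answer to sharpen the next retrieval query.
        query = f"{question} {answer}"
    return answer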

Editing factual knowledge and explanatory ability of medical large language models

D Xu, Z Zhang, Z Zhu, Z Lin, Q Liu, X Wu, T Xu… - Proceedings of the 33rd …, 2024 - dl.acm.org
Model editing aims to precisely alter the behaviors of large language models (LLMs) in
relation to specific knowledge, while leaving unrelated knowledge intact. This approach has …
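
Model edits of this kind are commonly scored along reliability (the edited fact is produced), generality (paraphrases follow the edit), and locality (unrelated prompts are unaffected); the sketch below, with hypothetical model_before/model_after prompt-to-answer functions, shows such an evaluation rather than any particular editing method.

# Score an edit on reliability, generality, and locality. `model_before` and
# `model_after` are hypothetical prompt -> answer callables for the model
# before and after the edit; `edit` holds the edited prompt and target answer.

def evaluate_edit(model_before, model_after, edit, paraphrases, unrelated_prompts):
    reliability = model_after(edit["prompt"]) == edit["target"]
    generality = sum(model_after(p) == edit["target"] for p in paraphrases) / max(len(paraphrases), 1)
    locality = sum(model_after(p) == model_before(p) for p in unrelated_prompts) / max(len(unrelated_prompts), 1)
    return {"reliability": reliability, "generality": generality, "locality": locality}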

LoRAMoE: Revolutionizing mixture of experts for maintaining world knowledge in language model alignment

S Dou, E Zhou, Y Liu, S Gao, J Zhao… - arxiv preprint arxiv …, 2023 - simg.baai.ac.cn
Supervised fine-tuning (SFT) is a crucial step for large language models (LLMs), enabling
them to align with human instructions and enhance their capabilities in downstream tasks …
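
The layer sketched below, assuming PyTorch, conveys the general mixture-of-LoRA-experts idea behind such designs: a frozen base linear layer plus several low-rank adapters whose outputs are combined by a learned router. It is a rough sketch only; the cited paper's expert grouping and balancing constraints are not reproduced.

# Minimal mixture-of-LoRA-experts linear layer (illustrative sketch).
import torch
import torch.nn as nn

class LoRAMoELinear(nn.Module):
    def __init__(self, d_in, d_out, rank=8, n_experts=4):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        for p in self.base.parameters():
            p.requires_grad_(False)                 # keep the pretrained weights frozen
        self.down = nn.ModuleList([nn.Linear(d_in, rank, bias=False) for _ in range(n_experts)])
        self.up = nn.ModuleList([nn.Linear(rank, d_out, bias=False) for _ in range(n_experts)])
        self.router = nn.Linear(d_in, n_experts)    # token-wise routing over experts

    def forward(self, x):
        gates = torch.softmax(self.router(x), dim=-1)                              # (..., n_experts)
        expert_out = torch.stack([u(d(x)) for d, u in zip(self.down, self.up)], dim=-1)  # (..., d_out, n_experts)
        return self.base(x) + (expert_out * gates.unsqueeze(-2)).sum(dim=-1)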

Knowledge editing on black-box large language models

X Song, Z Wang, K He, G Dong, Y Mou, J Zhao… - arxiv preprint arxiv …, 2024 - arxiv.org
Knowledge editing (KE) aims to efficiently and precisely modify the behavior of large
language models (LLMs) to update specific knowledge without negatively influencing other …
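
Since a black-box model exposes no weights or gradients, one generic family of approaches applies edits in-context, prepending the updated fact and instructing the model to follow it; the sketch below (with a hypothetical llm() API wrapper) illustrates that idea and is not the cited paper's method.

# In-context editing for a black-box model: the updated fact is injected into
# the prompt and the model is told to prefer it for related questions only.

def in_context_edit(llm, new_fact, question):
    prompt = (
        f"Updated fact: {new_fact}\n"
        "When the question concerns this fact, answer according to the update; "
        "otherwise answer as usual.\n"
        f"Question: {question}\nAnswer:"
    )
    return llm(prompt)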

KnowAgent: Knowledge-augmented planning for LLM-based agents

Y Zhu, S Qiao, Y Ou, S Deng, N Zhang, S Lyu… - arxiv preprint arxiv …, 2024 - arxiv.org
Large Language Models (LLMs) have demonstrated great potential in complex reasoning
tasks, yet they fall short when tackling more sophisticated challenges, especially when …
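
Knowledge-augmented planning can be caricatured as constraining the agent's next action to those an action knowledge base permits from the current state; the rule table and llm() call below are illustrative placeholders rather than the cited system's actual knowledge base.

# Toy knowledge-augmented planning step: the agent may only pick a follow-up
# action that the action knowledge base allows from its current state.

ACTION_KNOWLEDGE = {
    "start": ["search", "lookup"],
    "search": ["lookup", "finish"],
    "lookup": ["search", "finish"],
}

def plan_step(state, task, llm):
    allowed = ACTION_KNOWLEDGE.get(state, ["finish"])
    prompt = (f"Task: {task}\nCurrent action: {state}\n"
              f"Choose the next action from {allowed}:")
    choice = llm(prompt).strip()
    # Reject hallucinated actions that violate the action knowledge.
    return choice if choice in allowed else allowed[0]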

A survey on the memory mechanism of large language model based agents

Z Zhang, X Bo, C Ma, R Li, X Chen, Q Dai, J Zhu… - arxiv preprint arxiv …, 2024 - arxiv.org
Large language model (LLM) based agents have recently attracted much attention from the
research and industry communities. Compared with original LLMs, LLM-based agents are …
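
At its simplest, an agent memory writes past interactions to a store and recalls the most relevant entries into the next prompt; the toy class below uses word overlap for recall, whereas the designs surveyed add summarization, forgetting, and vector search.

# Toy agent memory: append past interactions as text, recall by word overlap.

class SimpleMemory:
    def __init__(self):
        self.entries = []

    def write(self, text):
        self.entries.append(text)

    def recall(self, query, k=3):
        q = set(query.lower().split())
        ranked = sorted(self.entries, key=lambda e: len(q & set(e.lower().split())), reverse=True)
        return ranked[:k]

memory = SimpleMemory()
memory.write("User prefers metric units.")
memory.write("Last session discussed retrieval-augmented generation.")
print(memory.recall("What units does the user prefer?"))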