Survey of vulnerabilities in large language models revealed by adversarial attacks

E Shayegani, MAA Mamun, Y Fu, P Zaree… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) are swiftly advancing in architecture and capability, and as
they integrate more deeply into complex systems, the urgency to scrutinize their security …

A survey on RAG meeting LLMs: Towards retrieval-augmented large language models

W Fan, Y Ding, L Ning, S Wang, H Li, D Yin… - Proceedings of the 30th …, 2024 - dl.acm.org
As one of the most advanced techniques in AI, Retrieval-Augmented Generation (RAG) can
offer reliable and up-to-date external knowledge, providing huge convenience for numerous …

Retrieval-augmented generation for natural language processing: A survey

S Wu, Y Xiong, Y Cui, H Wu, C Chen, Y Yuan… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) have demonstrated great success in various fields,
benefiting from their huge amount of parameters that store knowledge. However, LLMs still …

Efficient large language models: A survey

Z Wan, X Wang, C Liu, S Alam, Y Zheng, J Liu… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) have demonstrated remarkable capabilities in important
tasks such as natural language understanding and language generation, and thus have the …

A systematic survey of prompt engineering on vision-language foundation models

J Gu, Z Han, S Chen, A Beirami, B He, G Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
Prompt engineering is a technique that involves augmenting a large pre-trained model with
task-specific hints, known as prompts, to adapt the model to new tasks. Prompts can be …

Label words are anchors: An information flow perspective for understanding in-context learning

L Wang, L Li, D Dai, D Chen, H Zhou, F Meng… - arXiv preprint arXiv …, 2023 - arxiv.org
In-context learning (ICL) emerges as a promising capability of large language models
(LLMs) by providing them with demonstration examples to perform diverse tasks. However …

Exchange-of-thought: Enhancing large language model capabilities through cross-model communication

Z Yin, Q Sun, C Chang, Q Guo, J Dai, X Huang… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) have recently made significant strides in complex
reasoning tasks through the Chain-of-Thought technique. Despite this progress, their …

Learning to retrieve in-context examples for large language models

L Wang, N Yang, F Wei - arXiv preprint arXiv:2307.07164, 2023 - arxiv.org
Large language models (LLMs) have demonstrated their ability to learn in-context, allowing
them to perform various tasks based on a few input-output examples. However, the …

Improving contrastive learning of sentence embeddings from AI feedback

Q Cheng, X Yang, T Sun, L Li, X Qiu - arXiv preprint arXiv:2305.01918, 2023 - arxiv.org
Contrastive learning has become a popular approach in natural language processing,
particularly for the learning of sentence embeddings. However, the discrete nature of natural …

In-context learning with iterative demonstration selection

C Qin, A Zhang, C Chen, A Dagar, W Ye - arXiv preprint arXiv:2310.09881, 2023 - arxiv.org
Spurred by advancements in scale, large language models (LLMs) have demonstrated
strong few-shot learning ability via in-context learning (ICL). However, the performance of …