Retrieval-augmented generation for natural language processing: A survey
Large language models (LLMs) have demonstrated great success in various fields,
benefiting from the vast number of parameters that store knowledge. However, LLMs still …
Fine-tuning image transformers using learnable memory
M Sandler, A Zhmoginov… - Proceedings of the …, 2022 - openaccess.thecvf.com
In this paper we propose augmenting Vision Transformer models with learnable memory
tokens. Our approach allows the model to adapt to new tasks, using few parameters, while …
Graphreader: Building graph-based agent to enhance long-context abilities of large language models
Long-context capabilities are essential for large language models (LLMs) to tackle complex
and long-input tasks. Despite numerous efforts made to optimize LLMs for long contexts …