Large language models for information retrieval: A survey

Y Zhu, H Yuan, S Wang, J Liu, W Liu, C Deng… - arXiv preprint arXiv …, 2023 - arxiv.org
As a primary means of information acquisition, information retrieval (IR) systems, such as
search engines, have integrated themselves into our daily lives. These systems also serve …

Dense text retrieval based on pretrained language models: A survey

WX Zhao, J Liu, R Ren, JR Wen - ACM Transactions on Information …, 2024 - dl.acm.org
Text retrieval is a long-standing research topic in information seeking, where a system is
required to return relevant information resources to users' queries in natural language. From …
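
A minimal bi-encoder sketch of the dense retrieval setup the survey covers: encode query and documents into one vector space and rank by similarity. The model name and toy corpus below are illustrative choices, not taken from the survey.

```python
# Minimal dense-retrieval sketch: encode query and documents with a pretrained
# bi-encoder and rank documents by dot-product similarity.
# The model name and toy corpus are illustrative, not from the survey.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

corpus = [
    "Dense retrieval encodes queries and documents into a shared vector space.",
    "BM25 is a classical lexical ranking function based on term statistics.",
    "Pretrained language models can be fine-tuned as text encoders.",
]
query = "How do dense retrievers represent text?"

doc_vecs = model.encode(corpus, normalize_embeddings=True)   # (n_docs, dim)
query_vec = model.encode(query, normalize_embeddings=True)   # (dim,)

scores = doc_vecs @ query_vec                 # cosine similarity after normalization
for rank, idx in enumerate(np.argsort(-scores), start=1):
    print(f"{rank}. score={scores[idx]:.3f}  {corpus[idx]}")
```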

Large language models are effective text rankers with pairwise ranking prompting

Z Qin, R Jagerman, K Hui, H Zhuang, J Wu… - arXiv preprint arXiv …, 2023 - arxiv.org
Ranking documents using Large Language Models (LLMs) by directly feeding the query and
candidate documents into the prompt is an interesting and practical problem. However …
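
A sketch of the pairwise ranking prompting idea: the LLM is only ever asked which of two passages is more relevant, and those pairwise preferences drive the ordering. The `call_llm` stub and the prompt wording are assumptions, not the exact template from the paper.

```python
# Sketch of pairwise ranking prompting (PRP): ask the LLM to compare two
# passages at a time and sort candidates using those pairwise judgments.
# `call_llm` is a placeholder for any chat/completion API.
from functools import cmp_to_key

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM API call here")

PROMPT = (
    "Given the query: {query}\n\n"
    "Passage A: {a}\n\nPassage B: {b}\n\n"
    "Which passage is more relevant to the query? Answer 'A' or 'B'."
)

def prp_compare(query: str, a: str, b: str) -> int:
    """Return -1 if passage a is preferred, +1 if passage b is preferred."""
    answer = call_llm(PROMPT.format(query=query, a=a, b=b)).strip().upper()
    return -1 if answer.startswith("A") else 1

def prp_rank(query: str, passages: list[str]) -> list[str]:
    # Sorting with the LLM as comparator needs O(n log n) comparisons;
    # the paper also discusses all-pairs and sliding-window variants.
    return sorted(passages, key=cmp_to_key(lambda a, b: prp_compare(query, a, b)))
```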

RankVicuna: Zero-shot listwise document reranking with open-source large language models

R Pradeep, S Sharifymoghaddam, J Lin - arXiv preprint arXiv:2309.15088, 2023 - arxiv.org
Researchers have successfully applied large language models (LLMs) such as ChatGPT to
reranking in an information retrieval context, but to date, such work has mostly been built on …
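
The listwise setup behind RankVicuna-style rerankers has this general shape: number the candidates in one prompt, ask for a permutation such as "[2] > [3] > [1]", and parse it back into an ordering. The prompt text and `call_llm` stub below are illustrative assumptions, not RankVicuna's exact template.

```python
# Sketch of zero-shot listwise reranking: present numbered passages in a single
# prompt and parse the permutation the model returns (e.g. "[2] > [3] > [1]").
import re

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM API call here")

def listwise_rerank(query: str, passages: list[str]) -> list[str]:
    numbered = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Rank the following passages by relevance to the query.\n"
        f"Query: {query}\n\n{numbered}\n\n"
        "Output only the ranking, e.g. [2] > [1] > [3]."
    )
    reply = call_llm(prompt)
    order = [int(m) - 1 for m in re.findall(r"\[(\d+)\]", reply)]
    # Guard against malformed output: drop duplicates and out-of-range ids,
    # then append anything the model forgot to mention.
    seen, ranking = set(), []
    for i in order:
        if 0 <= i < len(passages) and i not in seen:
            seen.add(i)
            ranking.append(i)
    ranking += [i for i in range(len(passages)) if i not in seen]
    return [passages[i] for i in ranking]
```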

A survey on retrieval-augmented text generation for large language models

Y Huang, J Huang - arXiv preprint arXiv:2404.10981, 2024 - arxiv.org
Retrieval-Augmented Generation (RAG) merges retrieval methods with deep learning
advancements to address the static limitations of large language models (LLMs) by enabling …
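
A minimal retrieval-augmented generation loop of the kind the survey categorizes: retrieve top-k passages, place them in the prompt as context, and generate a grounded answer. The `retrieve` and `call_llm` stubs are placeholders; real systems add chunking, reranking, and citation handling.

```python
# Minimal RAG sketch: retrieve top-k passages for the question, put them in the
# prompt as context, and let the LLM generate an answer grounded in them.
def retrieve(question: str, k: int = 3) -> list[str]:
    raise NotImplementedError("plug in your retriever / vector store here")

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM API call here")

def rag_answer(question: str, k: int = 3) -> str:
    passages = retrieve(question, k=k)
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Answer the question using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return call_llm(prompt)
```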

RankZephyr: Effective and Robust Zero-Shot Listwise Reranking is a Breeze!

R Pradeep, S Sharifymoghaddam, J Lin - arXiv preprint arXiv:2312.02724, 2023 - arxiv.org
In information retrieval, proprietary large language models (LLMs) such as GPT-4 and open-
source counterparts such as LLaMA and Vicuna have played a vital role in reranking …
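
Listwise rerankers in this line of work typically process candidate lists that are longer than one prompt can hold with a sliding-window pass. The sketch below shows that outer loop under the assumption of a `rerank_window` helper (e.g. the listwise sketch above); the window and stride values are illustrative.

```python
# Sketch of sliding-window listwise reranking: rerank overlapping windows of
# candidates from the bottom of the list to the top, so strong documents can
# "bubble up" even when the full list does not fit in one prompt.
def rerank_window(query: str, passages: list[str]) -> list[str]:
    raise NotImplementedError("plug in a listwise LLM reranker here")

def sliding_window_rerank(query: str, passages: list[str],
                          window: int = 20, stride: int = 10) -> list[str]:
    ranked = list(passages)
    start = max(len(ranked) - window, 0)
    while True:
        end = start + window
        ranked[start:end] = rerank_window(query, ranked[start:end])
        if start == 0:
            break
        start = max(start - stride, 0)
    return ranked
```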

InPars-v2: Large language models as efficient dataset generators for information retrieval

V Jeronymo, L Bonifacio, H Abonizio, M Fadaee… - arXiv preprint arXiv …, 2023 - arxiv.org
Recently, InPars introduced a method to efficiently use large language models (LLMs) in
information retrieval tasks: via few-shot examples, an LLM is induced to generate relevant …
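
A sketch of the InPars recipe as the snippet describes it: a few-shot prompt induces the LLM to write a plausible query for each corpus document, yielding synthetic (query, document) pairs for training a retriever or reranker. The prompt wording, few-shot examples, and `call_llm` stub are assumptions; InPars-v2's additional step of filtering pairs with a reranker is omitted.

```python
# Sketch of InPars-style synthetic data generation: show the LLM a few
# document -> query examples, then ask it to write a relevant query for a new
# document. The resulting (query, document) pairs become training data.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM API call here")

FEW_SHOT = [
    ("The Eiffel Tower was completed in 1889 for the World's Fair in Paris.",
     "when was the eiffel tower built"),
    ("Water boils at 100 degrees Celsius at sea-level atmospheric pressure.",
     "at what temperature does water boil"),
]

def generate_query(document: str) -> str:
    shots = "\n\n".join(f"Document: {d}\nRelevant query: {q}" for d, q in FEW_SHOT)
    prompt = f"{shots}\n\nDocument: {document}\nRelevant query:"
    return call_llm(prompt).strip()

def build_training_pairs(corpus: list[str]) -> list[tuple[str, str]]:
    return [(generate_query(doc), doc) for doc in corpus]
```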

Fine-tuning LLaMA for multi-stage text retrieval

X Ma, L Wang, N Yang, F Wei, J Lin - Proceedings of the 47th …, 2024 - dl.acm.org
While large language models (LLMs) have shown impressive NLP capabilities, existing IR
applications mainly focus on prompting LLMs to generate query expansions or on generating …
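
In contrast to prompting-only uses, this line of work fine-tunes the LLM itself as a retriever and reranker. The sketch below shows one common way to use a decoder-only LM as a dense retriever: take the final hidden state of an appended end-of-sequence token as the text embedding and score query-passage pairs by dot product. The model name, prefixes, pooling choice, and omitted contrastive fine-tuning are assumptions about the general recipe, not the paper's exact setup.

```python
# Sketch of using a decoder-only LM as a dense retriever: append an EOS token,
# take its final-layer hidden state as the text embedding, and score
# query-passage pairs by dot product. Model name and pooling are illustrative.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-hf"   # illustrative; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

@torch.no_grad()
def embed(text: str) -> torch.Tensor:
    inputs = tokenizer(text + tokenizer.eos_token, return_tensors="pt")
    hidden = model(**inputs).last_hidden_state          # (1, seq_len, dim)
    vec = hidden[0, -1]                                 # EOS-position state
    return torch.nn.functional.normalize(vec, dim=-1)

def score(query: str, passage: str) -> float:
    return float(embed("query: " + query) @ embed("passage: " + passage))
```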

APEER: Automatic prompt engineering enhances large language model reranking

C Jin, H Peng, S Zhao, Z Wang, W Xu, L Han… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) have significantly enhanced Information Retrieval (IR)
across various modules, such as reranking. Despite impressive performance, current zero …
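
A sketch of the general automatic-prompt-engineering loop the title points to: score a candidate reranking prompt on a small development set, ask an LLM to critique and rewrite it, and keep the best-performing version. The helper names, feedback wording, and evaluation metric below are illustrative assumptions, not APEER's exact algorithm.

```python
# Sketch of an automatic prompt-refinement loop for LLM reranking: evaluate a
# candidate prompt on held-out queries, ask an LLM for a revised prompt, and
# keep whichever version scores best.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM API call here")

def evaluate_prompt(rerank_prompt: str) -> float:
    """Return e.g. nDCG@10 of a reranker using this prompt on a dev set."""
    raise NotImplementedError("plug in your reranking evaluation here")

def refine_prompt(seed_prompt: str, rounds: int = 5) -> str:
    best_prompt, best_score = seed_prompt, evaluate_prompt(seed_prompt)
    current = seed_prompt
    for _ in range(rounds):
        feedback_request = (
            "Here is a prompt used to make an LLM rerank passages:\n\n"
            f"{current}\n\n"
            "Point out its weaknesses and rewrite it so the passages are "
            "ranked more accurately. Output only the improved prompt."
        )
        current = call_llm(feedback_request).strip()
        score = evaluate_prompt(current)
        if score > best_score:
            best_prompt, best_score = current, score
    return best_prompt
```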

Found in the middle: Permutation self-consistency improves listwise ranking in large language models

R Tang, X Zhang, X Ma, J Lin, F Ture - arXiv preprint arXiv:2310.07712, 2023 - arxiv.org
Large language models (LLMs) exhibit positional bias in how they use context, which
especially complicates listwise ranking. To address this, we propose permutation self …
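
A sketch of permutation self-consistency as the snippet describes it: shuffle the candidate order several times, rerank each shuffled list, and aggregate the resulting rankings so positional bias averages out. The mean-rank (Borda-style) aggregation below is a simple stand-in for the paper's aggregation step, and `rerank` is a stub for any listwise LLM reranker.

```python
# Sketch of permutation self-consistency: rerank several shuffled copies of the
# candidate list and aggregate the rankings so positional bias averages out.
# Mean-rank aggregation is a simple stand-in for the paper's aggregation step.
import random
from collections import defaultdict

def rerank(query: str, passages: list[str]) -> list[str]:
    raise NotImplementedError("plug in a listwise LLM reranker here")

def permutation_self_consistency(query: str, passages: list[str],
                                 n_permutations: int = 8,
                                 seed: int = 0) -> list[str]:
    rng = random.Random(seed)
    rank_sums = defaultdict(float)
    for _ in range(n_permutations):
        shuffled = passages[:]
        rng.shuffle(shuffled)
        for position, passage in enumerate(rerank(query, shuffled)):
            rank_sums[passage] += position
    # A lower average position across permutations means the passage is
    # consistently judged relevant regardless of where it appears in the prompt.
    return sorted(passages, key=lambda p: rank_sums[p])
```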