Do LLMs understand user preferences? Evaluating LLMs on user rating prediction

WC Kang, J Ni, N Mehta, M Sathiamoorthy… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) have demonstrated exceptional capabilities in generalizing
to new tasks in a zero-shot or few-shot manner. However, the extent to which LLMs can …
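The prediction task here amounts to prompting an LLM with a user's rating history and parsing a numeric answer. A minimal sketch of that setup, assuming an illustrative prompt template and a hypothetical call_llm stub (neither is the paper's exact protocol):

```python
# A sketch of zero-shot rating prediction with an LLM. The prompt wording and
# the parsing rule are illustrative assumptions; `call_llm` is a hypothetical
# stub for any chat/completion API.
import re

def build_prompt(user_history, candidate_item):
    """Format a user's past ratings as context for the LLM."""
    lines = [f'- rated "{title}" {stars}/5' for title, stars in user_history]
    return (
        "Here are items a user has rated:\n"
        + "\n".join(lines)
        + f'\nOn a scale of 1 to 5, how would this user rate "{candidate_item}"?'
        + "\nAnswer with a single number."
    )

def predict_rating(user_history, candidate_item, call_llm):
    """Parse the first number in the LLM's reply as the predicted rating."""
    reply = call_llm(build_prompt(user_history, candidate_item))
    match = re.search(r"[1-5](?:\.\d+)?", reply)
    return float(match.group()) if match else None

# Canned "LLM" so the sketch runs end to end:
history = [("The Matrix", 5), ("Inception", 4), ("Titanic", 2)]
print(predict_rating(history, "Blade Runner", lambda p: "I'd guess 4 out of 5."))
```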

RocketQAv2: A joint training method for dense passage retrieval and passage re-ranking

R Ren, Y Qu, J Liu, WX Zhao, Q She, H Wu… - arXiv preprint arXiv …, 2021 - arxiv.org
In various natural language processing tasks, passage retrieval and passage re-ranking are
two key procedures in finding and ranking relevant information. Since both of these …
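One way to train the two procedures jointly (RocketQAv2 calls this dynamic listwise distillation) is to push the retriever's score distribution over a candidate list toward the re-ranker's. A toy sketch of that listwise KL objective, with random scores standing in for real dual-encoder and cross-encoder outputs:

```python
# Listwise distillation sketch: make the retriever's softmax distribution over
# a candidate list match the re-ranker's. Scores are random stand-ins.
import torch
import torch.nn.functional as F

retriever_scores = torch.randn(8, requires_grad=True)  # one query, 8 passages
reranker_scores = torch.randn(8)                       # treated as the teacher

# KL(teacher || student) over the listwise softmax distributions.
loss = F.kl_div(
    F.log_softmax(retriever_scores, dim=-1),
    F.softmax(reranker_scores, dim=-1),
    reduction="sum",
)
loss.backward()  # gradients flow only into the retriever's scores
print(loss.item())
```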

RocketQA: An optimized training approach to dense passage retrieval for open-domain question answering

Y Qu, Y Ding, J Liu, K Liu, R Ren, WX Zhao… - arXiv preprint arXiv …, 2020 - arxiv.org
In open-domain question answering, dense passage retrieval has become a new paradigm
to retrieve relevant passages for finding answers. Typically, the dual-encoder architecture is …
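A minimal sketch of the dual-encoder architecture the snippet refers to, with nn.EmbeddingBag layers standing in for the pretrained Transformer encoders and in-batch negatives supplying the contrastive signal:

```python
# Dual-encoder sketch: queries and passages are embedded independently,
# relevance is a dot product, and in-batch softmax cross-entropy trains the
# encoders. Sizes are toy values; real systems use pretrained Transformers.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoder(nn.Module):
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.q_emb = nn.EmbeddingBag(vocab, dim)  # stand-in query encoder
        self.p_emb = nn.EmbeddingBag(vocab, dim)  # stand-in passage encoder

    def forward(self, q_tokens, p_tokens):
        q = self.q_emb(q_tokens)                  # (B, dim)
        p = self.p_emb(p_tokens)                  # (B, dim)
        return q @ p.T                            # (B, B) similarity matrix

model = DualEncoder()
q = torch.randint(0, 1000, (4, 12))               # 4 queries, 12 tokens each
p = torch.randint(0, 1000, (4, 120))              # 4 positive passages
scores = model(q, p)
# In-batch negatives: passage i is the positive for query i, the rest negatives.
loss = F.cross_entropy(scores, torch.arange(4))
loss.backward()
print(loss.item())
```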

RankT5: Fine-tuning T5 for text ranking with ranking losses

H Zhuang, Z Qin, R Jagerman, K Hui, J Ma… - Proceedings of the 46th …, 2023 - dl.acm.org
Pretrained language models such as BERT have been shown to be exceptionally effective
for text ranking. However, there are limited studies on how to leverage more powerful …
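A sketch of the kind of listwise softmax cross-entropy ranking loss the paper applies on top of per-document scores; the random scores tensor here stands in for the outputs of a fine-tuned T5:

```python
# Listwise softmax cross-entropy over per-document scores. In the paper the
# scores come from T5 (e.g. the logit of a special token); here they are
# random placeholders so the sketch runs standalone.
import torch
import torch.nn.functional as F

scores = torch.randn(2, 5, requires_grad=True)   # 2 queries, 5 docs each
labels = torch.tensor([[1., 0., 0., 0., 0.],     # relevance labels
                       [0., 0., 1., 0., 0.]])

# Cross-entropy against the normalized relevance labels, averaged over queries.
loss = -(labels / labels.sum(-1, keepdim=True)
         * F.log_softmax(scores, dim=-1)).sum(-1).mean()
loss.backward()
print(loss.item())
```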

Optimizing dense retrieval model training with hard negatives

J Zhan, J Mao, Y Liu, J Guo, M Zhang… - Proceedings of the 44th …, 2021 - dl.acm.org
Ranking has always been one of the top concerns in information retrieval research. For
decades, the lexical matching signal has dominated the ad-hoc retrieval process, but solely …
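A sketch of hard-negative mining in the sense the snippet describes: rather than sampling negatives at random, take the highest-scoring non-relevant passages under the current retriever. Scores and ids below are toy data:

```python
# Hard-negative selection sketch: rank all passages by the current retriever's
# score and keep the top-scoring ones that are NOT labeled relevant. These
# "hard" negatives replace random negatives in the next training step.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.standard_normal(100)        # retriever scores for 100 passages
positive_ids = {3, 17}                   # labeled relevant passages

ranked = np.argsort(-scores)             # passage ids sorted by score, desc
hard_negatives = [int(i) for i in ranked if int(i) not in positive_ids][:5]
print(hard_negatives)                    # top-scoring non-relevant passages
```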

Explaining answers with entailment trees

B Dalvi, P Jansen, O Tafjord, Z Xie, H Smith… - arXiv preprint arXiv …, 2021 - arxiv.org
Our goal, in the context of open-domain textual question-answering (QA), is to explain
answers by showing the line of reasoning from what is known to the answer, rather than …
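The entailment tree itself is a simple recursive structure: leaves are known facts, internal nodes are intermediate conclusions entailed by their children, and the root supports the answer. A minimal sketch with illustrative field names (not the dataset's actual schema):

```python
# Entailment-tree sketch: each node holds a statement; an internal node's
# statement is meant to be entailed by its children's statements.
from dataclasses import dataclass, field

@dataclass
class Node:
    statement: str
    children: list["Node"] = field(default_factory=list)

tree = Node(
    "an iron nail conducts electricity",
    children=[
        Node("an iron nail is made of iron"),   # known fact (leaf)
        Node("iron conducts electricity"),      # known fact (leaf)
    ],
)

def print_tree(node, depth=0):
    print("  " * depth + node.statement)
    for child in node.children:
        print_tree(child, depth + 1)

print_tree(tree)
```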

Adversarial retriever-ranker for dense text retrieval

H Zhang, Y Gong, Y Shen, J Lv, N Duan… - arXiv preprint arXiv …, 2021 - arxiv.org
Current dense text retrieval models face two typical challenges. First, they adopt a siamese
dual-encoder architecture to encode queries and documents independently for fast indexing …
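A toy sketch of the adversarial setup the snippet describes, with a retriever trained to propose passages the ranker still scores highly and a ranker trained to separate the positive from them; the loss forms below are simplified assumptions, not the paper's exact objectives:

```python
# Adversarial retriever-ranker sketch: the ranker (discriminator) pushes the
# gold passage above the rest; the retriever (generator) matches the ranker's
# distribution, learning to surface the passages the ranker finds confusing.
import torch
import torch.nn.functional as F

retriever_scores = torch.randn(8, requires_grad=True)  # query vs 8 passages
ranker_scores = torch.randn(8, requires_grad=True)
positive = 0                                           # index of gold passage

# Discriminator step: standard cross-entropy toward the positive passage.
d_loss = F.cross_entropy(ranker_scores.unsqueeze(0), torch.tensor([positive]))
d_loss.backward()

# Generator step: KL toward the (frozen) ranker's distribution.
g_loss = F.kl_div(
    F.log_softmax(retriever_scores, dim=-1),
    F.softmax(ranker_scores.detach(), dim=-1),
    reduction="sum",
)
g_loss.backward()
print(d_loss.item(), g_loss.item())
```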

The Expando-Mono-Duo design pattern for text ranking with pretrained sequence-to-sequence models

R Pradeep, R Nogueira, J Lin - arXiv preprint arXiv:2101.05667, 2021 - arxiv.org
We propose a design pattern for tackling text ranking problems, dubbed "Expando-Mono-
Duo", that has been empirically validated for a number of ad hoc retrieval tasks in different …

Are neural rankers still outperformed by gradient boosted decision trees?

Z Qin, L Yan, H Zhuang, Y Tay… - International …, 2021 - openreview.net
Despite the success of neural models on many major machine learning problems, their
effectiveness on traditional Learning-to-Rank (LTR) problems is still not widely …
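For reference, the gradient-boosted baseline in such comparisons is typically a LambdaMART-style ranker; a minimal sketch with LightGBM on synthetic data (real LTR benchmarks supply graded relevance per query-document pair):

```python
# LambdaMART-style GBDT baseline sketch using LightGBM's lambdarank objective.
# Features and labels are random; `group` gives the number of docs per query.
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))        # 200 query-doc pairs, 10 features
y = rng.integers(0, 5, 200)               # graded relevance labels 0..4
groups = [20] * 10                        # 10 queries, 20 docs each

ranker = lgb.LGBMRanker(objective="lambdarank", n_estimators=50)
ranker.fit(X, y, group=groups)
print(ranker.predict(X[:20])[:5])          # scores for the first query's docs
```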

Injecting the BM25 score as text improves BERT-based re-rankers

A Askari, A Abolghasemi, G Pasi, W Kraaij… - … on Information Retrieval, 2023 - Springer
In this paper we propose a novel approach for combining first-stage lexical retrieval models
and Transformer-based re-rankers: we inject the relevance score of the lexical model as a …
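A sketch of the injection idea: verbalize the first-stage BM25 score and prepend it to the cross-encoder input so the re-ranker can condition on the lexical signal. The exact template is an assumption; the tokenizer call is standard Hugging Face usage:

```python
# BM25-score injection sketch: the lexical score is rendered as text and fed
# to a BERT-style cross-encoder together with the query and passage.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def build_input(query, passage, bm25_score):
    # Prepend the score as plain text; the template here is an assumption.
    query_with_score = f"{bm25_score:.1f} {query}"
    return tokenizer(query_with_score, passage,
                     truncation=True, return_tensors="pt")

enc = build_input("what is dense retrieval?",
                  "Dense retrieval encodes queries and passages as vectors.",
                  bm25_score=12.7)
print(tokenizer.decode(enc["input_ids"][0]))
```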