Information retrieval: recent advances and beyond

KA Hambarde, H Proenca - IEEE Access, 2023 - ieeexplore.ieee.org
This paper provides an extensive and thorough overview of the models and techniques
utilized in the first and second stages of the typical information retrieval processing chain …
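
A minimal sketch of the two-stage processing chain the survey covers: a cheap lexical first stage narrows the corpus, then a costlier second stage reranks the survivors. Both scorers below are toy stand-ins, not models from the survey.

    # Illustrative two-stage retrieval pipeline: first stage recalls
    # candidates cheaply, second stage reranks them more carefully.

    def first_stage_score(query, doc):
        # Term-overlap proxy for a lexical retriever such as BM25.
        q_terms = set(query.lower().split())
        return sum(1 for t in doc.lower().split() if t in q_terms)

    def second_stage_score(query, doc):
        # Placeholder for a neural re-ranker (e.g., a cross-encoder).
        return first_stage_score(query, doc) / (1 + len(doc.split()))

    def retrieve(query, corpus, k_first=100, k_final=10):
        candidates = sorted(corpus, key=lambda d: first_stage_score(query, d),
                            reverse=True)[:k_first]
        return sorted(candidates, key=lambda d: second_stage_score(query, d),
                      reverse=True)[:k_final]

    docs = ["neural ranking models", "synthetic aperture radar", "bm25 ranking"]
    print(retrieve("ranking models", docs, k_first=2, k_final=1))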

[BOOK][B] Pretrained transformers for text ranking: BERT and beyond


J Lin, R Nogueira, A Yates - 2022 - books.google.com
The goal of text ranking is to generate an ordered list of texts retrieved from a corpus in
response to a query. Although the most common formulation of text ranking is search …
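
The formulation described here, score each text against the query and return the texts in descending score order, fits in a few lines; the overlap scorer is a hypothetical stand-in for any learned ranking model.

    # Text ranking in its most common formulation: score every candidate
    # text against the query, return them ordered by descending score.

    def score(query: str, text: str) -> float:
        q = set(query.lower().split())
        return len(q & set(text.lower().split())) / (len(q) or 1)

    def rank(query: str, corpus: list[str]) -> list[str]:
        return sorted(corpus, key=lambda t: score(query, t), reverse=True)

    print(rank("pretrained transformers",
               ["transformers for ranking", "pretrained transformers", "BM25"]))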

PARADE: Passage representation aggregation for document reranking

C Li, A Yates, S MacAvaney, B He, Y Sun - ACM Transactions on …, 2023 - dl.acm.org
Pre-trained transformer models, such as BERT and T5, have been shown to be highly effective
at ad hoc passage and document ranking. Due to the inherent sequence length limits of these …
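
The aggregation idea named in the title can be sketched as: split a long document into passages that fit the encoder's length limit, represent each passage, and pool the passage representations into one document vector. The splitting and pooling choices below are illustrative; PARADE itself studies several aggregation strategies, and the encoder here is a toy hash histogram, not BERT or T5.

    # Sketch of passage representation aggregation for document reranking.

    def split_into_passages(doc: str, max_tokens: int = 8) -> list[str]:
        tokens = doc.split()
        return [" ".join(tokens[i:i + max_tokens])
                for i in range(0, len(tokens), max_tokens)]

    def encode(text: str, dim: int = 4) -> list[float]:
        # Hypothetical stand-in for a transformer encoder's pooled output:
        # a deterministic token-hash histogram, not a real model vector.
        vec = [0.0] * dim
        for tok in text.split():
            vec[sum(map(ord, tok)) % dim] += 1.0
        return vec

    def aggregate(passage_vecs: list[list[float]]) -> list[float]:
        # Max pooling across passages; other poolings (mean, attention)
        # are equally valid aggregation strategies.
        return [max(col) for col in zip(*passage_vecs)]

    doc = "a long document whose token count exceeds the encoder length limit"
    print(aggregate([encode(p) for p in split_into_passages(doc)]))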

MS MARCO: Benchmarking ranking models in the large-data regime

N Craswell, B Mitra, E Yilmaz, D Campos… - Proceedings of the 44th …, 2021 - dl.acm.org
Evaluation efforts such as TREC, CLEF, NTCIR and FIRE, alongside public leaderboards
such as MS MARCO, are intended to encourage research and track our progress …
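
MS MARCO's official passage-ranking metric is MRR@10: the reciprocal rank of the first relevant result within the top ten, averaged over queries. A direct implementation:

    # MRR@10: for each query, take the reciprocal rank of the first
    # relevant result in the top 10 (0 if none appears), then average.

    def mrr_at_10(rankings: list[list[str]], relevant: list[set[str]]) -> float:
        total = 0.0
        for ranked_ids, rel_ids in zip(rankings, relevant):
            for rank, doc_id in enumerate(ranked_ids[:10], start=1):
                if doc_id in rel_ids:
                    total += 1.0 / rank
                    break
        return total / len(rankings)

    # Query 1: first relevant hit at rank 2; query 2: none in top 10.
    print(mrr_at_10([["d3", "d7", "d1"], ["d9", "d4"]],
                    [{"d7"}, {"d8"}]))  # -> (0.5 + 0.0) / 2 = 0.25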

An improved deep neural network for small-ship detection in SAR imagery

B Hu, H Miao - IEEE Journal of Selected Topics in Applied …, 2023 - ieeexplore.ieee.org
Ship detection in remote-sensing images based on synthetic aperture radar (SAR)
plays an important role in managing water transportation and marine safety. However …

Revisiting bag of words document representations for efficient ranking with transformers

D Rau, M Dehghani, J Kamps - ACM Transactions on Information …, 2024 - dl.acm.org
Modern transformer-based information retrieval models achieve state-of-the-art performance
across various benchmarks. The self-attention of the transformer models is a powerful …
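
As a rough illustration of the bag-of-words idea the title revisits: a document collapses to its unique terms and their counts, discarding order, so a scorer touches each distinct term once rather than every token position. This is the generic BoW representation, not the paper's specific transformer variant.

    # Generic bag-of-words representation and a term-frequency scorer.
    from collections import Counter

    def bag_of_words(doc: str) -> Counter:
        return Counter(doc.lower().split())

    def bow_score(query: str, bow: Counter) -> int:
        # Sum of in-document frequencies of the unique query terms.
        return sum(bow[t] for t in set(query.lower().split()))

    bow = bag_of_words("ranking models rank documents for ranking tasks")
    print(bow_score("ranking documents", bow))  # 2 (ranking) + 1 (documents) = 3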

The power of selecting key blocks with local pre-ranking for long document information retrieval

M Li, DN Popa, J Chagnon, YG Cinar… - ACM Transactions on …, 2023 - dl.acm.org
On a wide range of natural language processing and information retrieval tasks, transformer-
based models, particularly pre-trained language models like BERT, have demonstrated …
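
The title's idea, pre-rank local blocks of a long document and keep only the most promising ones for the expensive model, can be sketched as below; the cheap block scorer, block size, and k are assumptions for illustration, whereas the paper's local pre-ranking is learned.

    # Key-block selection with a cheap local pre-ranker: cut the document
    # into fixed-size blocks, score each against the query, and forward
    # only the top-k blocks to the costly re-ranker.

    def blocks(doc: str, size: int = 6) -> list[str]:
        toks = doc.split()
        return [" ".join(toks[i:i + size]) for i in range(0, len(toks), size)]

    def cheap_score(query: str, block: str) -> int:
        return len(set(query.lower().split()) & set(block.lower().split()))

    def select_key_blocks(query: str, doc: str, k: int = 2) -> str:
        ranked = sorted(blocks(doc), key=lambda b: cheap_score(query, b),
                        reverse=True)
        return " ".join(ranked[:k])  # shortened input for the re-ranker

    doc = "background section with general discussion then the relevant ranking evidence block"
    print(select_key_blocks("relevant ranking", doc, k=1))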

Lightweight composite re-ranking for efficient keyword search with BERT

Y Yang, Y Qiao, J Shao, X Yan, T Yang - Proceedings of the Fifteenth …, 2022 - dl.acm.org
Recently, transformer-based ranking models have been shown to deliver high relevance for
document search, and the relevance-efficiency tradeoff becomes important for fast query …
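
One common way to trade relevance for efficiency, and a plausible reading of "composite" re-ranking, is to interpolate a cheap lexical score with an expensive neural score; the scorers and the weight below are hypothetical, not the paper's exact design.

    # Composite re-ranking sketch: blend a cheap lexical signal with a
    # costly neural signal into one final score.

    def lexical_score(query: str, doc: str) -> float:
        q = set(query.lower().split())
        return len(q & set(doc.lower().split())) / (len(q) or 1)

    def neural_score(query: str, doc: str) -> float:
        # Placeholder for a BERT-style cross-encoder score in [0, 1].
        return lexical_score(query, doc) ** 0.5

    def composite_score(query: str, doc: str, alpha: float = 0.3) -> float:
        return alpha * lexical_score(query, doc) + (1 - alpha) * neural_score(query, doc)

    docs = ["keyword search with bert", "unrelated text", "fast keyword search"]
    print(sorted(docs, key=lambda d: composite_score("keyword search", d),
                 reverse=True))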

Long document re-ranking with modular re-ranker

L Gao, J Callan - Proceedings of the 45th International ACM SIGIR …, 2022 - dl.acm.org
Long document re-ranking has been a challenging problem for neural re-rankers based on
deep language models like BERT. Early work breaks the documents into short passage-like …
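
The "early work" pattern the snippet mentions, splitting a long document into passages, scoring each independently, and letting the best passage score stand in for the document (often called MaxP), looks roughly like this; the passage scorer is a toy stand-in for a BERT re-ranker.

    # MaxP-style baseline: the document score is the maximum of its
    # per-passage relevance scores.

    def passages(doc: str, size: int = 10) -> list[str]:
        toks = doc.split()
        return [" ".join(toks[i:i + size]) for i in range(0, len(toks), size)]

    def passage_score(query: str, passage: str) -> float:
        q = set(query.lower().split())
        return len(q & set(passage.lower().split())) / (len(q) or 1)

    def maxp_document_score(query: str, doc: str) -> float:
        return max(passage_score(query, p) for p in passages(doc))

    doc = "lots of filler tokens " * 5 + "long document re-ranking with BERT"
    print(maxp_document_score("document re-ranking", doc))  # 1.0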

Investigating the effects of sparse attention on cross-encoders

F Schlatt, M Fröbe, M Hagen - European Conference on Information …, 2024 - Springer
Cross-encoders are effective passage and document re-rankers but less efficient than other
neural or classic retrieval models. A few previous studies have applied windowed self …
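
Windowed self-attention, the sparsification the snippet refers to, restricts each token to a fixed-radius neighborhood, so the attention mask shrinks from a dense quadratic pattern to a band; a mask constructor under that assumption:

    # Windowed (local) self-attention mask: token i may attend to token j
    # only if |i - j| <= w, giving O(n * w) allowed pairs instead of the
    # O(n^2) of dense self-attention.

    def windowed_attention_mask(n: int, w: int) -> list[list[bool]]:
        return [[abs(i - j) <= w for j in range(n)] for i in range(n)]

    for row in windowed_attention_mask(n=6, w=1):
        print("".join("#" if allowed else "." for allowed in row))
    # ##....
    # ###...
    # .###..
    # ..###.
    # ...###
    # ....##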