Information retrieval: recent advances and beyond
This paper provides a comprehensive overview of the models and techniques
utilized in the first and second stages of the typical information retrieval processing chain …
[BOOK] Pretrained transformers for text ranking: BERT and beyond
The goal of text ranking is to generate an ordered list of texts retrieved from a corpus in
response to a query. Although the most common formulation of text ranking is search …
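To make that formulation concrete, here is a minimal sketch of text ranking as an ordered list of corpus texts scored against a query. The term-overlap scorer is a toy placeholder, not one of the book's learned rankers (Python assumed throughout):

```python
# Minimal sketch of the text-ranking formulation: score every candidate
# text against the query and return the texts in descending-score order.
# The term-overlap scorer is a toy stand-in for a learned model.

def score(query: str, text: str) -> float:
    q_terms = set(query.lower().split())
    t_terms = set(text.lower().split())
    return len(q_terms & t_terms) / (len(q_terms) or 1)

def rank(query: str, corpus: list[str]) -> list[tuple[float, str]]:
    scored = [(score(query, t), t) for t in corpus]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)

corpus = [
    "BERT improves passage ranking",
    "synthetic aperture radar imagery",
    "transformers for ad hoc document ranking",
]
for s, t in rank("transformers for ranking", corpus):
    print(f"{s:.2f}  {t}")
```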
PARADE: Passage representation aggregation for document reranking
Pre-trained transformer models, such as BERT and T5, have been shown to be highly effective at
ad hoc passage and document ranking. Due to the inherent sequence length limits of these …
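A rough sketch of passage representation aggregation in the spirit of PARADE: encode each passage of a long document separately and pool the passage vectors before scoring. The random vectors below stand in for BERT [CLS] embeddings, and max pooling is only one of the aggregation strategies the paper studies:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

def split_passages(tokens, size=4):
    # Break a long token sequence into fixed-size passages.
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]

def encode(_tokens):
    # Placeholder for a BERT passage encoder returning a [CLS]-like vector.
    return rng.normal(size=DIM)

doc_tokens = "long documents exceed the transformer input limit".split()
passage_vecs = np.stack([encode(p) for p in split_passages(doc_tokens)])
doc_vec = passage_vecs.max(axis=0)            # aggregate: max pooling
query_vec = encode("transformer input limit".split())
print("relevance score:", float(doc_vec @ query_vec))
```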
MS MARCO: Benchmarking ranking models in the large-data regime
Evaluation efforts such as TREC, CLEF, NTCIR and FIRE, alongside public leaderboards
such as MS MARCO, are intended to encourage research and track our progress …
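The MS MARCO passage-ranking leaderboard reports MRR@10: the reciprocal rank of the first relevant result within the top 10, averaged over queries. A minimal sketch with hypothetical ranked lists and relevance judgments:

```python
def mrr_at_10(rankings: dict, qrels: dict) -> float:
    # rankings: query id -> ranked list of doc ids
    # qrels:    query id -> set of relevant doc ids
    total = 0.0
    for qid, ranked_doc_ids in rankings.items():
        for rank, doc_id in enumerate(ranked_doc_ids[:10], start=1):
            if doc_id in qrels.get(qid, set()):
                total += 1.0 / rank       # credit the first relevant hit
                break
    return total / len(rankings)

rankings = {"q1": ["d3", "d7", "d1"], "q2": ["d2", "d9"]}
qrels = {"q1": {"d7"}, "q2": {"d5"}}      # hypothetical judgments
print(mrr_at_10(rankings, qrels))         # (1/2 + 0) / 2 = 0.25
```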
An improved deep neural network for small-ship detection in SAR imagery
B Hu, H Miao - IEEE Journal of Selected Topics in Applied …, 2023 - ieeexplore.ieee.org
Ship detection using synthetic aperture radar (SAR) remote-sensing images plays an
important role in managing water transportation and marine safety. However …
Revisiting bag of words document representations for efficient ranking with transformers
Modern transformer-based information retrieval models achieve state-of-the-art performance
across various benchmarks. The self-attention mechanism of transformer models is a powerful …
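As a point of reference for what a bag-of-words representation buys in efficiency, here is a generic inverted-index scorer with plain term-frequency weights; the paper's learned weighting scheme is not reproduced here:

```python
from collections import Counter, defaultdict

docs = {
    "d1": "transformer models achieve state of the art ranking",
    "d2": "bag of words representations are cheap to score",
}

# Inverted index: term -> {doc id: term frequency}.
index = defaultdict(dict)
for doc_id, text in docs.items():
    for term, tf in Counter(text.split()).items():
        index[term][doc_id] = tf

def bow_score(query: str) -> dict:
    # Sum term frequencies over query terms, touching only postings
    # for terms that actually occur in the query.
    scores = Counter()
    for term in query.split():
        for doc_id, tf in index.get(term, {}).items():
            scores[doc_id] += tf
    return dict(scores)

print(bow_score("ranking with bag of words"))
```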
The power of selecting key blocks with local pre-ranking for long document information retrieval
On a wide range of natural language processing and information retrieval tasks, transformer-
based models, particularly pre-trained language models like BERT, have demonstrated …
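A hedged sketch of the idea the title describes: cheaply pre-rank a document's blocks against the query locally, then keep only the top blocks for the expensive transformer. The overlap pre-ranker, block size, and block budget below are illustrative assumptions, not the paper's exact configuration:

```python
def blocks(tokens, size=6):
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]

def pre_rank_score(query_terms, block):
    # Cheap local pre-ranker: query-term overlap with the block.
    return len(set(query_terms) & set(block))

def select_key_blocks(query: str, doc: str, keep: int = 2) -> str:
    q = query.split()
    ranked = sorted(blocks(doc.split()),
                    key=lambda b: pre_rank_score(q, b), reverse=True)
    kept = ranked[:keep]                  # only these reach the transformer
    return " ".join(" ".join(b) for b in kept)

doc = ("pre-trained language models truncate long inputs so we first score "
       "each block locally and keep only the blocks most related to the query")
print(select_key_blocks("score blocks related to the query", doc))
```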
Lightweight composite re-ranking for efficient keyword search with BERT
Recently, transformer-based ranking models have been shown to deliver high relevance for
document search, and the relevance-efficiency tradeoff has become important for fast query …
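The relevance-efficiency tradeoff mentioned above is typically managed with a retrieve-then-rerank pipeline: a cheap first stage prunes the corpus so the expensive model only sees a few candidates. A generic sketch, with both scorers as toy placeholders rather than the paper's composite model:

```python
def cheap_score(query, text):
    # First stage: term overlap stands in for an inverted-index retriever.
    return len(set(query.split()) & set(text.split()))

def expensive_score(query, text):
    # Stand-in for a BERT re-ranker; deliberately only run on survivors.
    return cheap_score(query, text) + 0.1 * len(text.split())

def search(query, corpus, k=2):
    first = sorted(corpus, key=lambda t: cheap_score(query, t), reverse=True)
    candidates = first[:k]                # only k texts reach the re-ranker
    return sorted(candidates,
                  key=lambda t: expensive_score(query, t), reverse=True)

corpus = ["fast keyword search", "bert based document ranking models",
          "composite re-ranking for keyword search"]
print(search("keyword search ranking", corpus))
```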
Long document re-ranking with modular re-ranker
L Gao, J Callan - Proceedings of the 45th International ACM SIGIR …, 2022 - dl.acm.org
Long document re-ranking has been a challenging problem for neural re-rankers based on
deep language models like BERT. Early work breaks the documents into short passage-like …
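A sketch of the passage-based strategy the snippet calls "early work": split the long document into short chunks, score each chunk against the query independently, and take the best chunk score as the document score (score-level max pooling, in contrast to PARADE's representation-level pooling above). The chunk scorer is a toy stand-in for a BERT re-ranker:

```python
def chunk(tokens, size=5):
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]

def chunk_score(query_terms, chunk_tokens):
    # Placeholder per-chunk relevance: query-term overlap.
    return len(set(query_terms) & set(chunk_tokens))

def doc_score(query: str, doc: str) -> int:
    q = query.split()
    # Document score = best chunk score ("MaxP"-style aggregation).
    return max(chunk_score(q, c) for c in chunk(doc.split()))

doc = ("neural re-rankers truncate long inputs so documents are broken "
       "into short passage like units and scored piecewise")
print(doc_score("long documents neural re-rankers", doc))
```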
Investigating the effects of sparse attention on cross-encoders
Cross-encoders are effective passage and document re-rankers but less efficient than other
neural or classic retrieval models. A few previous studies have applied windowed self …
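A minimal illustration of the windowed self-attention such studies apply to cross-encoders: a banded mask restricts each token to a fixed neighbourhood instead of the full sequence. This uses PyTorch's stock MultiheadAttention rather than any paper-specific kernel; the window size is arbitrary:

```python
import torch

def banded_mask(seq_len: int, window: int) -> torch.Tensor:
    # Boolean attention mask: True entries are positions a token may NOT
    # attend to, following torch.nn.MultiheadAttention's convention.
    idx = torch.arange(seq_len)
    allowed = (idx[None, :] - idx[:, None]).abs() <= window
    return ~allowed

seq_len, dim = 6, 16
attn = torch.nn.MultiheadAttention(embed_dim=dim, num_heads=2,
                                   batch_first=True)
x = torch.randn(1, seq_len, dim)
out, weights = attn(x, x, x, attn_mask=banded_mask(seq_len, window=1))
print(weights[0].round(decimals=2))      # non-zero only on the band
```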