Introducing neural bag of whole-words with ColBERTer: Contextualized late interactions using enhanced reduction
Recent progress in neural information retrieval has demonstrated large gains in quality,
while often sacrificing efficiency and interpretability compared to classical approaches. We …
Efficient neural ranking using forward indexes and lightweight encoders
Dual-encoder-based dense retrieval models have become the standard in IR. They employ
large Transformer-based language models, which are notoriously inefficient in terms of …
On the interpolation of contextualized term-based ranking with BM25 for query-by-example retrieval
Term-based ranking with pre-trained transformer-based language models has recently
gained attention, as these models bring the contextualization power of transformers into the …
Efficient neural ranking using forward indexes
Neural document ranking approaches, specifically transformer models, have achieved
impressive gains in ranking performance. However, query processing using such over …