Do LLMs understand user preferences? Evaluating LLMs on user rating prediction
Large Language Models (LLMs) have demonstrated exceptional capabilities in generalizing
to new tasks in a zero-shot or few-shot manner. However, the extent to which LLMs can …
RocketQAv2: A joint training method for dense passage retrieval and passage re-ranking
In various natural language processing tasks, passage retrieval and passage re-ranking are
two key procedures in finding and ranking relevant information. Since both the two …
RocketQA: An optimized training approach to dense passage retrieval for open-domain question answering
In open-domain question answering, dense passage retrieval has become a new paradigm
to retrieve relevant passages for finding answers. Typically, the dual-encoder architecture is …
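The dual-encoder architecture mentioned above encodes queries and passages independently, so passage vectors can be pre-computed and indexed. A minimal sketch of the idea, using a toy hashed bag-of-words encoder in place of the transformer towers (all names and the encoder itself are illustrative, not RocketQA's actual model):

```python
import hashlib
import math

DIM = 64  # toy embedding size; real dual-encoders use transformer outputs


def encode(text: str) -> list[float]:
    """Toy encoder: hashed bag-of-words vector, L2-normalised.
    Stands in for a query/passage encoder tower; the key property
    is that each side is encoded without seeing the other."""
    vec = [0.0] * DIM
    for tok in text.lower().split():
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        vec[h % DIM] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]


def score(query: str, passage: str) -> float:
    """Relevance = dot product of independently encoded vectors,
    which is what allows passage vectors to be indexed offline."""
    q, p = encode(query), encode(passage)
    return sum(a * b for a, b in zip(q, p))


passages = [
    "the capital of france is paris",
    "dense retrieval encodes passages into vectors",
]
ranked = sorted(
    passages,
    key=lambda p: score("what is the capital of france", p),
    reverse=True,
)
```

Because the two encoders never interact, this setup trades some accuracy for fast approximate-nearest-neighbour search, which is why re-rankers are typically applied to the top retrieved passages.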
RankT5: Fine-tuning T5 for text ranking with ranking losses
Pretrained language models such as BERT have been shown to be exceptionally effective
for text ranking. However, there are limited studies on how to leverage more powerful …
Optimizing dense retrieval model training with hard negatives
Ranking has always been one of the top concerns in information retrieval research. For
decades, the lexical matching signal has dominated the ad-hoc retrieval process, but solely …
Explaining answers with entailment trees
Our goal, in the context of open-domain textual question-answering (QA), is to explain
answers by showing the line of reasoning from what is known to the answer, rather than …
Adversarial retriever-ranker for dense text retrieval
Current dense text retrieval models face two typical challenges. First, they adopt a siamese
dual-encoder architecture to encode queries and documents independently for fast indexing …
The expando-mono-duo design pattern for text ranking with pretrained sequence-to-sequence models
We propose a design pattern for tackling text ranking problems, dubbed "Expando-Mono-Duo", that has been empirically validated for a number of ad hoc retrieval tasks in different …
Are neural rankers still outperformed by gradient boosted decision trees?
Despite the success of neural models on many major machine learning problems, their
effectiveness on traditional Learning-to-Rank (LTR) problems is still not widely …
Injecting the BM25 score as text improves BERT-based re-rankers
In this paper we propose a novel approach for combining first-stage lexical retrieval models
and Transformer-based re-rankers: we inject the relevance score of the lexical model as a …
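The idea above is to make the re-ranker condition on the first-stage lexical score by serialising it into the model's text input. A minimal sketch, assuming a simple template (the exact input format and field names in the paper may differ):

```python
def make_reranker_input(query: str, passage: str, bm25_score: float) -> str:
    """Inject the first-stage BM25 score as plain text so a
    cross-encoder re-ranker can read it alongside the query and
    passage. The template and rounding are illustrative choices;
    rounding keeps the score to a small, tokeniser-friendly string."""
    return f"query: {query} score: {bm25_score:.1f} passage: {passage}"


example = make_reranker_input(
    "what is dense retrieval",
    "Dense retrieval encodes text into vectors.",
    12.34,
)
```

The resulting string would then be fed to the re-ranker exactly like a normal query-passage pair, so no architectural change is needed to combine the lexical and neural signals.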