Evolution of semantic similarity—a survey
Estimating the semantic similarity between text data is one of the challenging and open
research problems in the field of Natural Language Processing (NLP). The versatility of …
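In practice, the semantic similarity between two texts is most often scored as the cosine of the angle between their embedding vectors. A minimal NumPy sketch with toy vectors (real sentence embeddings would come from a trained encoder; the vectors here are illustrative only):

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors, in [-1, 1]."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 3-d "embeddings"; a real model would produce high-dimensional vectors.
a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])   # same direction as a
c = np.array([-1.0, 0.0, 1.0])

print(cosine_similarity(a, b))  # close to 1.0 for parallel vectors
print(cosine_similarity(a, c))  # smaller for less-aligned vectors
```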
C-Pack: Packed resources for general Chinese embeddings
We introduce C-Pack, a package of resources that significantly advances the field of general
text embeddings for Chinese. C-Pack includes three critical resources. 1) C-MTP is a …
MTEB: Massive text embedding benchmark
Text embeddings are commonly evaluated on a small set of datasets from a single task not
covering their possible applications to other tasks. It is unclear whether state-of-the-art …
AnglE-optimized text embeddings
High-quality text embedding is pivotal in improving semantic textual similarity (STS) tasks,
which are crucial components in Large Language Model (LLM) applications. However, a …
The 2019 n2c2/OHNLP track on clinical semantic textual similarity: overview
Background: Semantic textual similarity is a common task in the general English domain to
assess the degree to which the underlying semantics of 2 text segments are equivalent to …
Sentence-T5: Scalable sentence encoders from pre-trained text-to-text models
We provide the first exploration of sentence embeddings from text-to-text transformers (T5).
Sentence embeddings are broadly useful for language processing tasks. While T5 achieves …
SimCSE: Simple contrastive learning of sentence embeddings
This paper presents SimCSE, a simple contrastive learning framework that greatly advances
state-of-the-art sentence embeddings. We first describe an unsupervised approach, which …
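The training objective behind SimCSE-style contrastive learning is an in-batch InfoNCE loss: two encoded views of the same sentence form a positive pair, and the other sentences in the batch serve as negatives. A minimal NumPy sketch of that loss (the function name and the toy embeddings are illustrative, not the paper's implementation):

```python
import numpy as np

def info_nce_loss(z1: np.ndarray, z2: np.ndarray, tau: float = 0.05) -> float:
    """In-batch contrastive loss: for each row i of z1, row i of z2 is the
    positive and every other row of z2 is a negative."""
    # L2-normalise so the dot product equals cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                       # (N, N) similarity logits
    # Cross-entropy with the diagonal entry (i, i) as the correct class.
    log_softmax = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_softmax)))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))                    # a batch of 8 toy embeddings
matched = info_nce_loss(z, z)                   # perfectly aligned views
mismatched = info_nce_loss(z, rng.normal(size=(8, 16)))
```

Aligned views yield a much lower loss than unrelated ones, which is exactly the pressure that pulls positive pairs together in embedding space. In the unsupervised SimCSE setup, the two views come from passing the same sentence through the encoder with different dropout masks.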
ConSERT: A contrastive framework for self-supervised sentence representation transfer
Learning high-quality sentence representations benefits a wide range of natural language
processing tasks. Though BERT-based pre-trained language models achieve high …
On the sentence embeddings from pre-trained language models
Pre-trained contextual representations like BERT have achieved great success in natural
language processing. However, the sentence embeddings from the pre-trained language …
Whitening sentence representations for better semantics and faster retrieval
Pre-training models such as BERT have achieved great success in many natural language
processing tasks. However, how to obtain better sentence representation through these pre …
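The whitening idea in this line of work is a simple post-processing step: shift the embedding matrix to zero mean and apply a linear map that turns its covariance into the identity, which decorrelates the dimensions and tends to improve cosine-based retrieval. A minimal NumPy sketch under that reading (function and variable names are illustrative):

```python
import numpy as np

def whiten(embeddings: np.ndarray):
    """Whitening post-processing: zero-mean the embeddings and map their
    covariance to the identity via an SVD of the covariance matrix."""
    mu = embeddings.mean(axis=0, keepdims=True)
    cov = np.cov(embeddings - mu, rowvar=False)
    u, s, _ = np.linalg.svd(cov)
    w = u @ np.diag(1.0 / np.sqrt(s))           # whitening kernel
    return (embeddings - mu) @ w, w, mu

rng = np.random.default_rng(0)
# Simulate anisotropic embeddings: correlated features via a random mixing.
x = rng.normal(size=(256, 8)) @ rng.normal(size=(8, 8))
xw, w, mu = whiten(x)
# After whitening, the features are decorrelated with unit variance.
```

Because the kernel is fitted on a corpus of embeddings, the same `(w, mu)` pair must be reused to transform query embeddings at retrieval time.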