Recent advances in natural language processing via large pre-trained language models: A survey
Large, pre-trained language models (PLMs) such as BERT and GPT have drastically
changed the Natural Language Processing (NLP) field. For numerous NLP tasks …
A comprehensive survey on relation extraction: Recent advances and new frontiers
Relation extraction (RE) involves identifying the relations between entities from underlying
content. RE serves as the foundation for many natural language processing (NLP) and …
P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks
Prompt tuning, which only tunes continuous prompts with a frozen language model,
substantially reduces per-task storage and memory usage at training. However, in the …
Knowledge graph-enhanced molecular contrastive learning with functional prompt
Deep learning models can accurately predict molecular properties and help make the
search for potential drug candidates faster and more efficient. Many existing methods are …
LLMaAA: Making large language models as active annotators
Prevalent supervised learning methods in natural language processing (NLP) are
notoriously data-hungry, demanding large amounts of high-quality annotated data. In …
DeepStruct: Pretraining of language models for structure prediction
We introduce a method for improving the structural understanding abilities of language
models. Unlike previous approaches that finetune the models with task-specific …
Universal information extraction as unified semantic matching
The challenge of information extraction (IE) lies in the diversity of label schemas and the
heterogeneity of structures. Traditional methods require task-specific model design and rely …
Augmenting low-resource text classification with graph-grounded pre-training and prompting
Text classification is a fundamental problem in information retrieval with many real-world
applications, such as predicting the topics of online articles and the categories of e …
Revisiting large language models as zero-shot relation extractors
Relation extraction (RE) consistently involves a certain degree of labeled or unlabeled data,
even under the zero-shot setting. Recent studies have shown that large language models …
Consistency guided knowledge retrieval and denoising in LLMs for zero-shot document-level relation triplet extraction
Document-level Relation Triplet Extraction (DocRTE) is a fundamental task in information
systems that aims to simultaneously extract entities with semantic relations from a document …