A survey of the usages of deep learning for natural language processing
Over the last several years, the field of natural language processing has been propelled
forward by an explosion in the use of deep learning models. This article provides a brief …
Universal dependencies
Universal dependencies (UD) is a framework for morphosyntactic annotation of human
language, which to date has been used to create treebanks for more than 100 languages. In …
Universal Dependencies v2: An evergrowing multilingual treebank collection
Universal Dependencies is an open community effort to create cross-linguistically consistent
treebank annotation for many languages within a dependency-based lexicalist framework …
How multilingual is multilingual BERT?
In this paper, we show that Multilingual BERT (M-BERT), released by Devlin et al. (2018) as
a single language model pre-trained from monolingual corpora in 104 languages, is …
FLAIR: An easy-to-use framework for state-of-the-art NLP
We present FLAIR, an NLP framework designed to facilitate training and distribution of state-
of-the-art sequence labeling, text classification and language models. The core idea of the …
Machine learning for ancient languages: A survey
Ancient languages preserve the cultures and histories of the past. However, their study is
fraught with difficulties, and experts must tackle a range of challenging text-based tasks, from …
IndoLEM and IndoBERT: A benchmark dataset and pre-trained language model for Indonesian NLP
Although the Indonesian language is spoken by almost 200 million people and is the 10th
most spoken language in the world, it is under-represented in NLP research. Previous work …
Multilingual is not enough: BERT for Finnish
Deep learning-based language models pretrained on large unannotated text corpora have
been demonstrated to allow efficient transfer learning for natural language processing, with …
Tokenizing, POS tagging, lemmatizing and parsing UD 2.0 with UDPipe
Many natural language processing tasks, including the most advanced ones, routinely start
with several basic processing steps: tokenization and segmentation, most likely also POS …
Linguistically-informed self-attention for semantic role labeling
Current state-of-the-art semantic role labeling (SRL) uses a deep neural network with no
explicit linguistic features. However, prior work has shown that gold syntax trees can …