Linguistic knowledge and transferability of contextual representations
Contextual word representations derived from large-scale neural language models are
successful across a diverse set of NLP tasks, suggesting that they encode useful and …
Natural language processing advancements by deep learning: A survey
Natural Language Processing (NLP) helps empower intelligent machines by improving the understanding of human language for linguistic-based human-computer …
Detecting emergent intersectional biases: Contextualized word embeddings contain a distribution of human-like biases
With the starting point that implicit human biases are reflected in the statistical regularities of
language, it is possible to measure biases in English static word embeddings. State-of-the …
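The snippet above alludes to association tests over word embeddings. As a purely illustrative sketch (not the paper's own contextualized procedure), the following computes a WEAT-style per-word association score: the mean cosine similarity of a target word to one attribute set minus its mean similarity to another. The random 3-dimensional vectors stand in for real embeddings, and the word lists are hypothetical.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    # How much more strongly w associates with attribute set A than with B.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

# Toy 3-d vectors standing in for word embeddings (purely illustrative).
rng = np.random.default_rng(0)
emb = {word: rng.normal(size=3)
       for word in ["nurse", "engineer", "she", "he", "woman", "man"]}

A = [emb["she"], emb["woman"]]  # female attribute terms (hypothetical list)
B = [emb["he"], emb["man"]]     # male attribute terms (hypothetical list)

for target in ["nurse", "engineer"]:
    print(target, round(association(emb[target], A, B), 3))
```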
Meta-learning approaches for learning-to-learn in deep learning: A survey
Y Tian, X Zhao, W Huang - Neurocomputing, 2022 - Elsevier
Compared to traditional machine learning, deep learning can learn deeper abstract data
representation and understand scattered data properties. It has gained considerable …
DAGA: Data augmentation with a generation approach for low-resource tagging tasks
Data augmentation techniques have been widely used to improve machine learning
performance as they enhance the generalization capability of models. In this work, to …
A survey on recent advances in sequence labeling from deep learning models
Sequence labeling (SL) is a fundamental research problem encompassing a variety of tasks,
e.g., part-of-speech (POS) tagging, named entity recognition (NER), text chunking, etc. …
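To make the task framing in the survey entry above concrete, here is a minimal, self-contained example of what sequence-labeling data looks like: one label per token, shown for POS tagging and for NER in the common BIO scheme. The sentence and tag choices are illustrative and are not taken from the cited survey.

```python
# A single training example for sequence labeling: one label per token.
tokens   = ["Barack", "Obama", "visited", "Paris", "in", "2015", "."]

# Part-of-speech (POS) tags, one per token (Penn Treebank-style tag names).
pos_tags = ["NNP", "NNP", "VBD", "NNP", "IN", "CD", "."]

# Named entity recognition (NER) labels in the BIO scheme:
# B- marks the beginning of an entity, I- its continuation, O a non-entity token.
ner_tags = ["B-PER", "I-PER", "O", "B-LOC", "O", "O", "O"]

for tok, pos, ner in zip(tokens, pos_tags, ner_tags):
    print(f"{tok:10s} {pos:5s} {ner}")
```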
A survey on narrative extraction from textual data
Narratives are present in many forms of human expression and can be understood as a
fundamental way of communication between people. Computational understanding of the …
CharBERT: Character-aware pre-trained language model
Most pre-trained language models (PLMs) construct word representations at subword level
with Byte-Pair Encoding (BPE) or its variations, by which OOV (out-of-vocab) words are …
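The CharBERT snippet refers to subword vocabularies: a word outside the model's vocabulary is split into smaller known pieces rather than mapped to a single token. Below is a hedged sketch of greedy longest-match segmentation in the WordPiece style, with a made-up toy vocabulary and word; real BPE instead applies learned merge rules, but the effect on OOV words is similar.

```python
# Greedy longest-match subword segmentation (WordPiece-style), used here only to
# illustrate how an out-of-vocabulary word is broken into known subword units.
# The tiny vocabulary and the example word are invented for illustration.
VOCAB = {"un", "afford", "##afford", "##able", "##a", "##ble"}

def segment(word: str) -> list[str]:
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece   # continuation pieces are prefixed with ##
            if piece in VOCAB:
                pieces.append(piece)
                break
            end -= 1
        else:
            return ["[UNK]"]           # no known piece covers this position
        start = end
    return pieces

print(segment("unaffordable"))  # ['un', '##afford', '##able']
```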
Small and practical BERT models for sequence labeling
We propose a practical scheme to train a single multilingual sequence labeling model that
yields state-of-the-art results and is small and fast enough to run on a single CPU. Starting …
A monolingual approach to contextualized word embeddings for mid-resource languages
We use the multilingual OSCAR corpus, extracted from Common Crawl via language
classification, filtering and cleaning, to train monolingual contextualized word embeddings …