On the linguistic representational power of neural machine translation models
Despite the recent success of deep neural networks in natural language processing and
other spheres of artificial intelligence, their interpretability remains a challenge. We analyze …
Are all languages created equal in multilingual BERT?
Multilingual BERT (mBERT) trained on 104 languages has shown surprisingly good cross-
lingual performance on several NLP tasks, even without explicit cross-lingual signals …
UDPipe 2.0 prototype at CoNLL 2018 UD shared task
M Straka - Proceedings of the CoNLL 2018 shared task …, 2018 - aclanthology.org
UDPipe is a trainable pipeline which performs sentence segmentation, tokenization, POS
tagging, lemmatization and dependency parsing. We present a prototype for UDPipe 2.0 …
JW300: A wide-coverage parallel corpus for low-resource languages
Viable cross-lingual transfer critically depends on the availability of parallel texts. Shortage
of such resources imposes a development and evaluation bottleneck in multilingual …
Small and practical BERT models for sequence labeling
We propose a practical scheme to train a single multilingual sequence labeling model that
yields state-of-the-art results and is small and fast enough to run on a single CPU. Starting …
A primer on pretrained multilingual language models
Multilingual Language Models (MLLMs) such as mBERT, XLM, XLM-R, etc. have
emerged as a viable option for bringing the power of pretraining to a large number of …
English intermediate-task training improves zero-shot cross-lingual transfer too
Intermediate-task training---fine-tuning a pretrained model on an intermediate task before
fine-tuning again on the target task---often improves model performance substantially on …
Specializing word embeddings (for parsing) by information bottleneck
Pre-trained word embeddings like ELMo and BERT contain rich syntactic and semantic
information, resulting in state-of-the-art performance on various tasks. We propose a very …
When is BERT multilingual? Isolating crucial ingredients for cross-lingual transfer
While recent work on multilingual language models has demonstrated their capacity for
cross-lingual zero-shot transfer on downstream tasks, there is a lack of consensus in the …
Viable dependency parsing as sequence labeling
We recast dependency parsing as a sequence labeling problem, exploring several
encodings of dependency trees as labels. While dependency parsing by means of …