Continual lifelong learning in natural language processing: A survey
Continual learning (CL) aims to enable information systems to learn from a continuous data stream across time. However, it is difficult for existing deep learning architectures to learn a …
A survey on large language models with multilingualism: Recent advances and new frontiers
Rapidly developing Large Language Models (LLMs) demonstrate remarkable multilingual capabilities in natural language processing, attracting global attention in both …
Parameter-efficient transfer learning with diff pruning
While task-specific finetuning of pretrained networks has led to significant empirical advances in NLP, the large size of networks makes finetuning difficult to deploy in multi-task …
Simple, scalable adaptation for neural machine translation
Fine-tuning pre-trained Neural Machine Translation (NMT) models is the dominant approach for adapting to new languages and domains. However, fine-tuning requires adapting and …
Recall and learn: Fine-tuning deep pretrained language models with less forgetting
Deep pretrained language models have achieved great success by pretraining first and then fine-tuning. But such a sequential transfer learning paradigm often confronts …
Explicit inductive bias for transfer learning with convolutional networks
In inductive transfer learning, fine-tuning pre-trained convolutional networks substantially outperforms training from scratch. When using fine-tuning, the underlying assumption is that …
A survey of domain adaptation for machine translation
Neural machine translation (NMT) is a deep learning-based approach for machine translation, which outperforms traditional statistical machine translation (SMT) and yields the …
Neural grammatical error correction systems with unsupervised pre-training on synthetic data
Considerable effort has been made to address the data sparsity problem in neural grammatical error correction. In this work, we propose a simple and surprisingly effective …
A constructive prediction of the generalization error across scales
How the generalization error of neural networks depends on model and dataset size is of critical importance both in practice and for understanding the theory of neural networks …
Domain adaptation and multi-domain adaptation for neural machine translation: A survey
D Saunders - Journal of Artificial Intelligence Research, 2022 - jair.org
The development of deep learning techniques has allowed Neural Machine Translation (NMT) models to become extremely powerful, given sufficient training data and training time …