A simple but tough-to-beat data augmentation approach for natural language understanding and generation
Adversarial training has been shown effective at endowing the learned representations with
stronger generalization ability. However, it typically requires expensive computation to …
BERT, mBERT, or BiBERT? A study on contextualized embeddings for neural machine translation
The success of bidirectional encoders using masked language models, such as BERT, on
numerous natural language processing tasks has prompted researchers to attempt to …
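As a concrete illustration of the setup this line of work examines, here is a minimal sketch that extracts contextualized source embeddings from a pretrained multilingual BERT via the Hugging Face transformers library. The checkpoint name and the idea of feeding these states to an NMT encoder are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: contextualized source embeddings from a pretrained
# multilingual BERT, as one might feed into an NMT encoder. The checkpoint
# "bert-base-multilingual-cased" is an assumption for illustration.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = AutoModel.from_pretrained("bert-base-multilingual-cased")
bert.eval()

sentence = "Das ist ein Test."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    # last_hidden_state: (batch, seq_len, hidden) contextual embeddings
    # that can replace or augment the NMT encoder's input embeddings.
    embeddings = bert(**inputs).last_hidden_state

print(embeddings.shape)  # e.g. torch.Size([1, 7, 768])
```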
BERTTune: Fine-tuning neural machine translation with BERTScore
Neural machine translation models are often biased toward the limited translation
references seen during training. To amend this form of overfitting, in this paper we propose …
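Since the title points at BERTScore, a minimal sketch of how BERTScore can serve as a sequence-level training signal may help. It uses the public bert-score package with a generic REINFORCE-style loss, which is an assumed stand-in for the paper's actual fine-tuning objective, not a reproduction of it.

```python
# Minimal sketch: BERTScore F1 as a sequence-level reward for sampled
# translations (a generic REINFORCE-style use; BERTTune's exact
# objective may differ).
import torch
from bert_score import score  # pip install bert-score

references = ["the cat sat on the mat"]
samples = ["a cat is sitting on the mat"]  # sampled from the NMT model

# P, R, F1 are tensors with one entry per (candidate, reference) pair.
P, R, F1 = score(samples, references, lang="en", verbose=False)
reward = F1  # higher BERTScore -> closer match to the reference

# With log_prob = summed log-probability of the sampled translation under
# the NMT model, a REINFORCE-style loss scales it by the reward:
log_prob = torch.tensor([-12.3], requires_grad=True)  # placeholder value
loss = -(reward.detach() * log_prob).mean()
loss.backward()
```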
CipherDAug: Ciphertext-based data augmentation for neural machine translation
We propose a novel data-augmentation technique for neural machine translation based on
ROT-$k$ ciphertexts. ROT-$k$ is a simple letter substitution cipher that replaces a letter in …
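ROT-$k$ itself is easy to pin down: it shifts every letter $k$ positions forward through the alphabet. A minimal sketch of enciphering source sentences to enlarge a parallel corpus follows; how the enciphered copies are combined during training is left to the paper.

```python
# Minimal sketch of ROT-k enciphering for data augmentation: each letter
# is shifted k positions forward in the alphabet, changing the surface
# form while preserving sentence structure.
import string

def rot_k(text: str, k: int) -> str:
    lower = string.ascii_lowercase
    upper = string.ascii_uppercase
    table = str.maketrans(
        lower + upper,
        lower[k:] + lower[:k] + upper[k:] + upper[:k],
    )
    return text.translate(table)

src = "the quick brown fox"
for k in (1, 2):
    # Each enciphered copy can be paired with the original target
    # sentence to enlarge the parallel training corpus.
    print(k, rot_k(src, k))
```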
Learning multiscale transformer models for sequence generation
Multiscale feature hierarchies have seen success in the computer vision area.
This further motivates researchers to design multiscale Transformers for natural language …
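One generic way to realize a multiscale Transformer is to pool token representations between layers so that higher layers operate on a coarser sequence. The strided average pooling below is an illustrative assumption about this family of designs, not the paper's specific architecture.

```python
# Generic sketch of building a coarser sequence scale by strided pooling
# between Transformer layers (an illustrative assumption; the paper's own
# multiscale design may differ).
import torch
import torch.nn as nn

class Downsample(nn.Module):
    """Halve sequence length by average-pooling adjacent token vectors."""
    def __init__(self):
        super().__init__()
        self.pool = nn.AvgPool1d(kernel_size=2, stride=2)

    def forward(self, x):            # x: (batch, seq_len, hidden)
        x = x.transpose(1, 2)        # (batch, hidden, seq_len)
        x = self.pool(x)             # (batch, hidden, seq_len // 2)
        return x.transpose(1, 2)     # (batch, seq_len // 2, hidden)

fine = torch.randn(2, 16, 512)    # fine-grained token representations
coarse = Downsample()(fine)       # coarser scale for higher layers
print(coarse.shape)               # torch.Size([2, 8, 512])
```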
Bi-SimCut: A simple strategy for boosting neural machine translation
We introduce Bi-SimCut: a simple but effective training strategy to boost neural machine
translation (NMT) performance. It consists of two procedures: bidirectional pretraining and …
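The first procedure, bidirectional pretraining, is commonly realized by training a single model on the corpus together with its reversed pairs. The sketch below shows only the data side of that idea; the direction-tag scheme is an assumption for illustration, not necessarily Bi-SimCut's exact recipe.

```python
# Minimal sketch of the data side of bidirectional pretraining: one model
# is trained on both src->tgt and tgt->src pairs. The direction tags here
# are an illustrative assumption.
def bidirectional_corpus(pairs):
    """pairs: list of (source_sentence, target_sentence) tuples."""
    out = []
    for src, tgt in pairs:
        out.append((f"<fwd> {src}", tgt))  # original direction
        out.append((f"<rev> {tgt}", src))  # reversed direction
    return out

pairs = [("ich liebe musik", "i love music")]
for s, t in bidirectional_corpus(pairs):
    print(s, "=>", t)
```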
Neural hidden Markov model for machine translation
Attention-based neural machine translation (NMT) models selectively focus on specific
source positions to produce a translation, which brings significant improvements over pure …
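For readers unfamiliar with the model class, the factorization below is the classical HMM decomposition that neural variants build on: the target sequence is generated by marginalizing over hidden alignments, with a lexicon model and a first-order alignment model. The neural version parameterizes both factors with networks and computes the sum with the forward algorithm; the notation here is assumed for illustration.

```latex
% Classical HMM factorization of translation: target words e_1^I are
% generated from source words f_1^J by summing over hidden alignments
% b_1^I (notation assumed for illustration).
p(e_1^I \mid f_1^J)
  = \sum_{b_1^I} \prod_{i=1}^{I}
    \underbrace{p(e_i \mid f_{b_i})}_{\text{lexicon}}
    \cdot
    \underbrace{p(b_i \mid b_{i-1}, J)}_{\text{alignment}}
```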
TranSFormer: Slow-fast transformer for machine translation
Learning multiscale Transformer models has been shown to be a viable approach to
improving machine translation systems. Prior research has primarily focused on treating …
EM-Network: Oracle-guided self-distillation for sequence learning
We introduce EM-Network, a novel self-distillation approach that effectively leverages target
information for supervised sequence-to-sequence (seq2seq) learning. In contrast to …
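In the self-distillation family this abstract describes, a teacher pass that also sees target-derived (oracle) information typically guides a student pass that sees only the source. The loss below is a generic sketch of that pattern, with placeholder tensors standing in for real model outputs; it is not EM-Network's exact objective.

```python
# Generic sketch of oracle-guided self-distillation: a teacher conditioned
# on target-derived information guides a source-only student. Tensors here
# are placeholders; shapes and conditioning are illustrative assumptions.
import torch
import torch.nn.functional as F

vocab, batch, steps = 100, 2, 5
teacher_logits = torch.randn(batch, steps, vocab)   # teacher saw the oracle
student_logits = torch.randn(batch, steps, vocab, requires_grad=True)
targets = torch.randint(vocab, (batch, steps))

# Supervised loss on the student plus a distillation term pulling the
# student's distribution toward the (detached) teacher distribution.
ce = F.cross_entropy(student_logits.view(-1, vocab), targets.view(-1))
kl = F.kl_div(
    F.log_softmax(student_logits, dim=-1),
    F.softmax(teacher_logits.detach(), dim=-1),
    reduction="batchmean",
)
loss = ce + kl
loss.backward()
```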
I2R: Intra and inter-modal representation learning for code search
Code search, which locates code snippets in large code repositories based on natural
language queries entered by developers, has become increasingly popular in the software …
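Inter-modal representation learning for code search is often realized with an in-batch contrastive (InfoNCE) objective that pulls matched query/code embedding pairs together and pushes mismatches apart. The sketch below shows that standard setup as an assumed representative of the family, not I2R's exact loss.

```python
# Generic sketch of inter-modal contrastive learning for code search:
# the i-th query in a batch should match the i-th code snippet, and all
# other in-batch snippets act as negatives (standard InfoNCE).
import torch
import torch.nn.functional as F

batch, dim = 4, 128
query_emb = F.normalize(torch.randn(batch, dim, requires_grad=True), dim=-1)
code_emb = F.normalize(torch.randn(batch, dim, requires_grad=True), dim=-1)

temperature = 0.07
logits = query_emb @ code_emb.t() / temperature  # query-code similarity matrix
labels = torch.arange(batch)                     # i-th query matches i-th code
loss = F.cross_entropy(logits, labels)
loss.backward()
```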