AMMUS: A survey of transformer-based pretrained models in natural language processing
KS Kalyan, A Rajasekharan, S Sangeetha - arXiv preprint arXiv …, 2021 - arxiv.org
Transformer-based pretrained language models (T-PTLMs) have achieved great success in
almost every NLP task. The evolution of these models started with GPT and BERT. These …
Putting GPT-4o to the sword: A comprehensive evaluation of language, vision, speech, and multimodal proficiency
As large language models (LLMs) continue to advance, evaluating their comprehensive
capabilities becomes significant for their application in various fields. This research study …
The BigScience ROOTS corpus: A 1.6 TB composite multilingual dataset
As language models grow ever larger, the need for large-scale high-quality text datasets has
never been more pressing, especially in multilingual settings. The BigScience workshop, a 1 …
Multilingual denoising pre-training for neural machine translation
This paper demonstrates that multilingual denoising pre-training produces significant
performance gains across a wide variety of machine translation (MT) tasks. We present …
Findings of the 2019 conference on machine translation (WMT19)
This paper presents the results of the premier shared task organized alongside the
Conference on Machine Translation (WMT) 2019. Participants were asked to build machine …
InfoXLM: An information-theoretic framework for cross-lingual language model pre-training
In this work, we present an information-theoretic framework that formulates cross-lingual
language model pre-training as maximizing mutual information between multilingual-multi …
Multilingual large language model: A survey of resources, taxonomy and frontiers
Multilingual Large Language Models are capable of using powerful Large Language
Models to handle and respond to queries in multiple languages, which achieves remarkable …
Accelerating transformer inference for translation via parallel decoding
Autoregressive decoding limits the efficiency of transformers for Machine Translation (MT).
The community proposed specific network architectures and learning-based methods to …
Findings of the 2021 conference on machine translation (WMT21)
F Akhbardeh, A Arkhangorodsky, M Biesialska… - Proceedings of the sixth …, 2021 - cris.fbk.eu
This paper presents the results of the news translation task, the multilingual low-resource
translation for Indo-European languages, the triangular translation task, and the automatic …
XLM-E: Cross-lingual language model pre-training via ELECTRA
In this paper, we introduce ELECTRA-style tasks to cross-lingual language model pre-
training. Specifically, we present two pre-training tasks, namely multilingual replaced token …