A survey of transformers
Transformers have achieved great success in many artificial intelligence fields, such as
natural language processing, computer vision, and audio processing. Therefore, it is natural …
Transformers in time-series analysis: A tutorial
Transformer architectures have widespread applications, particularly in Natural Language
Processing and Computer Vision. Recently, Transformers have been employed in various …
Transformers in time series: A survey
Transformers have achieved superior performances in many tasks in natural language
processing and computer vision, which also triggered great interest in the time series …
An empirical study of training end-to-end vision-and-language transformers
Abstract Vision-and-language (VL) pre-training has proven to be highly effective on various
VL downstream tasks. While recent work has shown that fully transformer-based VL models …
Autoformer: Searching transformers for visual recognition
Recently, pure transformer-based models have shown great potential for vision tasks such
as image classification and detection. However, the design of transformer networks is …
Gshard: Scaling giant models with conditional computation and automatic sharding
Neural network scaling has been critical for improving the model quality in many real-world
machine learning applications with vast amounts of training data and compute. Although this …
Deep modular co-attention networks for visual question answering
Abstract Visual Question Answering (VQA) requires a fine-grained and simultaneous
understanding of both the visual content of images and the textual content of questions …
Learning deep transformer models for machine translation
Transformer is the state-of-the-art model in recent machine translation evaluations. Two
strands of research are promising to improve models of this kind: the first uses wide …
Improving massively multilingual neural machine translation and zero-shot translation
Massively multilingual models for neural machine translation (NMT) are theoretically
attractive, but often underperform bilingual models and deliver poor zero-shot translations. In …
Attention in natural language processing
Attention is an increasingly popular mechanism used in a wide range of neural
architectures. The mechanism itself has been realized in a variety of formats. However …