Attention mechanism in neural networks: where it comes and where it goes
D Soydaner - Neural Computing and Applications, 2022 - Springer
A long time ago in the machine learning literature, the idea of incorporating a mechanism
inspired by the human visual system into neural networks was introduced. This idea is …
A survey on label-efficient deep image segmentation: Bridging the gap between weak supervision and dense prediction
The rapid development of deep learning has made great progress in image segmentation,
one of the fundamental tasks of computer vision. However, the current segmentation …
Vision transformers for single image dehazing
Image dehazing is a representative low-level vision task that estimates latent haze-free
images from hazy images. In recent years, convolutional neural network-based methods …
DaViT: Dual attention vision transformers
In this work, we introduce Dual Attention Vision Transformers (DaViT), a simple yet effective
vision transformer architecture that is able to capture global context while maintaining …
DilateFormer: Multi-scale dilated transformer for visual recognition
As a de facto solution, the vanilla Vision Transformers (ViTs) are encouraged to model long-
range dependencies between arbitrary image patches while the global attended receptive …
MPViT: Multi-path vision transformer for dense prediction
Dense computer vision tasks such as object detection and segmentation require effective
multi-scale feature representation for detecting or classifying objects or regions with varying …
N-gram in Swin transformers for efficient lightweight image super-resolution
While some studies have proven that Swin Transformer (Swin) with window self-attention
(WSA) is suitable for single image super-resolution (SR), the plain WSA ignores the broad …
Multi-scale high-resolution vision transformer for semantic segmentation
Vision Transformers (ViTs) have emerged with superior performance on computer
vision tasks compared to convolutional neural network (CNN)-based models. However, ViTs …
SPViT: Enabling faster vision transformers via latency-aware soft token pruning
Recently, Vision Transformer (ViT) has continuously established new milestones in
the computer vision field, while the high computation and memory cost makes its …
Accurate image restoration with attention retractable transformer
Recently, Transformer-based image restoration networks have achieved promising
improvements over convolutional neural networks due to parameter-independent global …