Vision transformers for dense prediction: A survey
S Zuo, Y **ao, X Chang, X Wang - Knowledge-based systems, 2022 - Elsevier
Transformers have demonstrated impressive expressiveness and transfer capability in
computer vision fields. Dense prediction is a fundamental problem in computer vision that is …
BiFormer: Vision transformer with bi-level routing attention
As the core building block of vision transformers, attention is a powerful tool to capture long-
range dependency. However, such power comes at a cost: it incurs a huge computation …
Vision transformer with deformable attention
Transformers have recently shown superior performance on various vision tasks. The large,
sometimes even global, receptive field endows Transformer models with higher …
A survey of the vision transformers and their CNN-transformer based variants
Vision transformers have become popular as a possible substitute for convolutional neural
networks (CNNs) for a variety of computer vision applications. These transformers, with their …
Dynamic neural network structure: A review for its theories and applications
The dynamic neural network (DNN), in contrast to the static counterpart, offers numerous
advantages, such as improved accuracy, efficiency, and interpretability. These benefits stem …
EAPT: efficient attention pyramid transformer for image processing
Recent transformer-based models, especially patch-based methods, have shown great
potential in vision tasks. However, splitting into fixed-size patches divides the input features into …
Vision transformer with quadrangle attention
Window-based attention has become a popular choice in vision transformers due to its
superior performance, lower computational complexity, and smaller memory footprint. However …
Bending reality: Distortion-aware transformers for adapting to panoramic semantic segmentation
Panoramic images with their 360° directional view encompass exhaustive information
about the surrounding space, providing a rich foundation for scene understanding. To unfold …
Towards practical certifiable patch defense with vision transformer
Patch attacks, one of the most threatening forms of physical attack in adversarial examples,
can induce misclassification by arbitrarily modifying pixels in a continuous …
Learning graph neural networks for image style transfer
State-of-the-art parametric and non-parametric style transfer approaches are prone to either
distorted local style patterns due to global statistics alignment, or unpleasant artifacts …