A comprehensive survey on source-free domain adaptation
Over the past decade, domain adaptation has become a widely studied branch of transfer
learning which aims to improve performance on target domains by leveraging knowledge …
Sparsity in transformers: A systematic literature review
Transformers have become the state-of-the-art architectures for various tasks in Natural
Language Processing (NLP) and Computer Vision (CV); however, their space and …
Vmamba: Visual state space model
Designing computationally efficient network architectures remains an ongoing necessity in
computer vision. In this paper, we adapt Mamba, a state-space language model, into …
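For readers unfamiliar with state-space models, the sketch below illustrates the discrete linear recurrence (x_t = A x_{t-1} + B u_t, y_t = C x_t) that Mamba-style layers build on. It is a minimal NumPy illustration with toy, randomly chosen parameters; the actual Mamba/VMamba layers use input-dependent parameters and an efficient selective scan, which are not reproduced here.

```python
import numpy as np

def ssm_scan(u, A, B, C):
    """Minimal linear state-space recurrence: x_t = A x_{t-1} + B u_t, y_t = C x_t.

    u: (T, d_in) input sequence; A: (d_state, d_state); B: (d_state, d_in); C: (d_out, d_state).
    Mamba-style models make these parameters input-dependent and use a parallel scan;
    this loop only illustrates the underlying recurrence.
    """
    x = np.zeros(A.shape[0])
    ys = []
    for t in range(u.shape[0]):
        x = A @ x + B @ u[t]          # update hidden state
        ys.append(C @ x)              # read out
    return np.stack(ys)

# toy usage: 8 steps, 4-dim input, 16-dim state, 4-dim output
rng = np.random.default_rng(0)
u = rng.normal(size=(8, 4))
A = 0.9 * np.eye(16)                  # stable toy transition matrix
B = rng.normal(size=(16, 4)) * 0.1
C = rng.normal(size=(4, 16)) * 0.1
print(ssm_scan(u, A, B, C).shape)     # (8, 4)
```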
Flatten transformer: Vision transformer using focused linear attention
The quadratic computation complexity of self-attention has been a persistent challenge
when applying Transformer models to vision tasks. Linear attention, on the other hand, offers …
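The snippet above contrasts softmax attention with linear attention. The sketch below illustrates that complexity difference using a generic non-negative kernel feature map, not the paper's focused linear attention; the function names and the ReLU-based feature map are illustrative assumptions.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard attention: the N x N score matrix makes cost quadratic in sequence length N."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])                    # (N, N)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0) + 1e-6):
    """Kernelized linear attention: reassociating (phi(Q) phi(K)^T) V as phi(Q) (phi(K)^T V)
    avoids the N x N matrix, so cost grows linearly in N."""
    Qp, Kp = phi(Q), phi(K)
    kv = Kp.T @ V                                              # (d, d_v), independent of N
    z = Qp @ Kp.sum(axis=0)                                    # (N,), per-query normalizer
    return (Qp @ kv) / z[:, None]

N, d = 16, 8
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(N, d)) for _ in range(3))
print(softmax_attention(Q, K, V).shape, linear_attention(Q, K, V).shape)  # (16, 8) (16, 8)
```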
Oneformer: One transformer to rule universal image segmentation
Universal Image Segmentation is not a new concept. Past attempts to unify image
segmentation include scene parsing, panoptic segmentation, and, more recently, new …
Agent attention: On the integration of softmax and linear attention
The attention module is the key component in Transformers. While the global attention
mechanism offers high expressiveness, its excessive computational cost restricts its …
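A rough sketch of the general agent-attention idea as described in the abstract: a small set of pooled agent tokens first attends to the keys/values, then the queries attend to the agents, so both softmax score matrices stay small. The query-pooling used to build the agents here is a hypothetical simplification, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def agent_attention(Q, K, V, n_agents=4):
    """Sketch: agent tokens aggregate information from K/V with one softmax attention,
    then broadcast it back to the queries with another. Both score matrices are
    N x n or n x N, so cost grows linearly with sequence length N."""
    N, d = Q.shape
    # hypothetical agent construction: average-pool the queries into n_agents tokens
    A = Q.reshape(n_agents, N // n_agents, d).mean(axis=1)     # (n, d)
    V_agent = softmax(A @ K.T / np.sqrt(d)) @ V                # (n, d_v) aggregation
    return softmax(Q @ A.T / np.sqrt(d)) @ V_agent             # (N, d_v) broadcast

N, d = 16, 8
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(N, d)) for _ in range(3))
print(agent_attention(Q, K, V).shape)  # (16, 8)
```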
Vit-comer: Vision transformer with convolutional multi-scale feature interaction for dense predictions
Although Vision Transformer (ViT) has achieved significant success in computer
vision, it does not perform well in dense prediction tasks due to the lack of inner-patch …
Metaformer baselines for vision
MetaFormer, the abstracted architecture of Transformer, has been found to play a significant
role in achieving competitive performance. In this paper, we further explore the capacity of …
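The sketch below illustrates what the abstracted MetaFormer block looks like: the token mixer is a pluggable argument, while normalization, residual connections, and the channel MLP are kept fixed. The global mean-pooling mixer in the toy example is only a PoolFormer-style illustration, not one of the specific models studied in the paper.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def metaformer_block(x, token_mixer, W1, W2):
    """Abstracted MetaFormer block: the token mixer is a pluggable component
    (attention, pooling, an MLP over tokens, ...); norms, residuals, and the
    channel MLP stay the same regardless of the mixer."""
    x = x + token_mixer(layer_norm(x))                 # token-mixing sub-block
    h = np.maximum(layer_norm(x) @ W1, 0.0)            # channel MLP with ReLU
    return x + h @ W2                                  # second residual

def pool_mixer(x):
    """Toy pooling mixer: each token sees the mean of all tokens minus itself."""
    return x.mean(axis=0, keepdims=True) - x

N, d = 16, 8
rng = np.random.default_rng(0)
x = rng.normal(size=(N, d))
W1 = rng.normal(size=(d, 4 * d)) * 0.1
W2 = rng.normal(size=(4 * d, d)) * 0.1
print(metaformer_block(x, pool_mixer, W1, W2).shape)   # (16, 8)
```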
Dilateformer: Multi-scale dilated transformer for visual recognition
As a de facto solution, the vanilla Vision Transformers (ViTs) are encouraged to model long-
range dependencies between arbitrary image patches while the global attended receptive …
Rmt: Retentive networks meet vision transformers
Vision Transformer (ViT) has gained increasing attention in the computer vision
community in recent years. However, the core component of ViT, Self-Attention, lacks explicit …