Large separable kernel attention: Rethinking the large kernel attention design in CNN
Abstract: Visual Attention Networks (VAN) with Large Kernel Attention (LKA) modules have been shown to provide remarkable performance that surpasses Vision Transformers (ViTs) …
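The LKA module referenced above is built by decomposing one large K×K convolution into a small depth-wise convolution, a depth-wise dilated convolution, and a 1×1 point-wise convolution, which is what makes large kernels affordable. A minimal sketch of the resulting parameter-count saving (following the decomposition described in the VAN paper; the channel count C = 64, kernel size K = 21, and dilation d = 3 below are illustrative assumptions, not values from the abstract):

```python
# Parameter counts for a K x K convolution over C channels:
# a dense conv vs. the LKA-style decomposition into
# a depth-wise (2d-1)^2 conv, a depth-wise dilated ceil(K/d)^2 conv,
# and a 1x1 point-wise conv.
import math

def dense_conv_params(C: int, K: int) -> int:
    """Standard K x K convolution with C input and C output channels."""
    return C * C * K * K

def lka_params(C: int, K: int, d: int) -> int:
    """LKA-style decomposition of a K x K kernel with dilation d."""
    dw = C * (2 * d - 1) ** 2               # depth-wise conv
    dw_dilated = C * math.ceil(K / d) ** 2  # depth-wise dilated conv
    pw = C * C                              # 1x1 point-wise conv
    return dw + dw_dilated + pw

# Example: a 21x21 kernel over 64 channels with dilation 3.
print(dense_conv_params(64, 21))  # 1806336
print(lka_params(64, 21, 3))      # 8832
```

The decomposition keeps the same effective receptive field while cutting parameters by roughly two orders of magnitude in this example, which is the trade-off the LKA and LSKA lines of work exploit.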
A ConvNet for the 2020s
The "Roaring 20s" of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification …
Robust mean teacher for continual and gradual test-time adaptation
Since experiencing domain shifts during test time is inevitable in practice, test-time adaptation (TTA) continues to adapt the model after deployment. Recently, the area of continual and …
Back to the source: Diffusion-driven adaptation to test-time corruption
Test-time adaptation harnesses test inputs to improve the accuracy of a model trained on
source data when tested on shifted target data. Most methods update the source model by …
3D common corruptions and data augmentation
We introduce a set of image transformations that can be used as corruptions to evaluate the
robustness of models as well as data augmentation mechanisms for training neural …
PixMix: Dreamlike pictures comprehensively improve safety measures
In real-world applications of machine learning, reliable and safe systems must consider
measures of performance beyond standard test set accuracy. These other goals include out …
A closer look at the robustness of contrastive language-image pre-training (CLIP)
W Tu, W Deng, T Gedeon - Advances in Neural Information …, 2023 - proceedings.neurips.cc
Abstract: Contrastive Language-Image Pre-training (CLIP) models have demonstrated remarkable generalization capabilities across multiple challenging distribution shifts …
Wavelet convolutions for large receptive fields
In recent years, there have been attempts to increase the kernel size of Convolutional
Neural Nets (CNNs) to mimic the global receptive field of Vision Transformers' (ViTs) self …
Benchmarking robustness of 3D point cloud recognition against common corruptions
Deep neural networks on 3D point cloud data have been widely used in the real world,
especially in safety-critical applications. However, their robustness against corruptions is …
On the effectiveness of adversarial training against common corruptions
The literature on robustness towards common corruptions shows no consensus on whether
adversarial training can improve the performance in this setting. First, we show that, when …