A survey of adversarial defenses and robustness in NLP
In the past few years, it has become increasingly evident that deep neural networks are not
resilient enough to withstand adversarial perturbations in input data, leaving them …
Understanding robustness of transformers for image classification
Abstract Deep Convolutional Neural Networks (CNNs) have long been the architecture of
choice for computer vision tasks. Recently, Transformer-based architectures like Vision …
On the robustness of vision transformers to adversarial examples
Recent advances in attention-based networks have shown that Vision Transformers can
achieve state-of-the-art or near state-of-the-art results on many image classification tasks …
On the adversarial robustness of vision transformers
Following the success in advancing natural language processing and understanding,
transformers are expected to bring revolutionary changes to computer vision. This work …
Context-free word importance scores for attacking neural networks
N Shakeel, S Shakeel - Journal of Computational and …, 2022 - ojs.bonviewpress.com
Leave-One-Out (LOO) scores provide estimates of feature importance in neural networks, for
adversarial attacks. In this work, we present context-free word scores as a query-efficient …
Theoretical limitations of self-attention in neural sequence models
M Hahn - Transactions of the Association for Computational …, 2020 - direct.mit.edu
Transformers are emerging as the new workhorse of NLP, showing great success across
tasks. Unlike LSTMs, transformers process input sequences entirely through self-attention …
Adversarial training for large neural language models
Generalization and robustness are both key desiderata for designing machine learning
methods. Adversarial training can enhance robustness, but past work often finds it hurts …
Vision transformers in domain adaptation and domain generalization: a study of robustness
Deep learning models are often evaluated in scenarios where the data distribution is
different from those used in the training and validation phases. This discrepancy presents a …
CodeAttack: Code-based adversarial attacks for pre-trained programming language models
Pre-trained programming language (PL) models (such as CodeT5, CodeBERT,
GraphCodeBERT, etc.) have the potential to automate software engineering tasks involving …
Adversarial robustness comparison of vision transformer and MLP-Mixer to CNNs
Convolutional Neural Networks (CNNs) have become the de facto gold standard in
computer vision applications in the past years. Recently, however, new model architectures …