A survey of adversarial defenses and robustness in NLP

S Goyal, S Doddapaneni, MM Khapra… - ACM Computing …, 2023 - dl.acm.org
In the past few years, it has become increasingly evident that deep neural networks are not
resilient enough to withstand adversarial perturbations in input data, leaving them …

Understanding robustness of transformers for image classification

S Bhojanapalli, A Chakrabarti… - Proceedings of the …, 2021 - openaccess.thecvf.com
Deep Convolutional Neural Networks (CNNs) have long been the architecture of
choice for computer vision tasks. Recently, Transformer-based architectures like Vision …

On the robustness of vision transformers to adversarial examples

K Mahmood, R Mahmood… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Recent advances in attention-based networks have shown that Vision Transformers can
achieve state-of-the-art or near state-of-the-art results on many image classification tasks …

On the adversarial robustness of vision transformers

R Shao, Z Shi, J Yi, PY Chen, CJ Hsieh - arXiv preprint arXiv:2103.15670, 2021 - arxiv.org
Following the success in advancing natural language processing and understanding,
transformers are expected to bring revolutionary changes to computer vision. This work …

Context-free word importance scores for attacking neural networks

N Shakeel, S Shakeel - Journal of Computational and …, 2022 - ojs.bonviewpress.com
Leave-One-Out (LOO) scores provide estimates of feature importance in neural networks for
adversarial attacks. In this work, we present context-free word scores as a query-efficient …
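For context on the LOO baseline this entry starts from, a minimal sketch is given below: the importance of a word is taken as the drop in the model's score when that word is removed. The toy classifier (`toy_sentiment_score`) and all names are illustrative assumptions, not taken from the paper, and the paper's context-free scores themselves are not reproduced here.

```python
# Minimal Leave-One-Out (LOO) word importance sketch (illustrative only).
from typing import Callable, List, Tuple


def loo_word_importance(
    words: List[str],
    score_fn: Callable[[str], float],
) -> List[Tuple[str, float]]:
    """Importance of each word = drop in the model's score when it is removed."""
    base = score_fn(" ".join(words))
    scores = []
    for i in range(len(words)):
        ablated = " ".join(words[:i] + words[i + 1:])
        scores.append((words[i], base - score_fn(ablated)))
    return scores


def toy_sentiment_score(text: str) -> float:
    """Hypothetical keyword-counting 'model', used only to make the sketch runnable."""
    positive = {"great", "good", "excellent"}
    negative = {"bad", "terrible", "awful"}
    tokens = text.lower().split()
    return float(sum(t in positive for t in tokens) - sum(t in negative for t in tokens))


if __name__ == "__main__":
    sentence = "the plot was great but the acting was terrible".split()
    for word, importance in loo_word_importance(sentence, toy_sentiment_score):
        print(f"{word:>10s}  {importance:+.1f}")
```

Words whose removal lowers the score most receive the highest importance, which is what makes LOO scores useful for choosing which words an adversarial attack should perturb first.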

Theoretical limitations of self-attention in neural sequence models

M Hahn - Transactions of the Association for Computational …, 2020 - direct.mit.edu
Transformers are emerging as the new workhorse of NLP, showing great success across
tasks. Unlike LSTMs, transformers process input sequences entirely through self-attention …

Adversarial training for large neural language models

X Liu, H Cheng, P He, W Chen, Y Wang… - arXiv preprint arXiv …, 2020 - arxiv.org
Generalization and robustness are both key desiderata for designing machine learning
methods. Adversarial training can enhance robustness, but past work often finds it hurts …
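As a rough illustration of the adversarial training referred to here, a generic FGSM-style training step is sketched below, assuming a PyTorch classifier over continuous inputs (e.g., embeddings or images). This is not the paper's specific procedure for large language models; all names are illustrative.

```python
# Generic single-step adversarial training sketch (FGSM-style), illustrative only.
import torch
import torch.nn.functional as F


def adversarial_training_step(model, optimizer, x, y, epsilon=0.1):
    """Train on an FGSM-perturbed copy of the batch in addition to the clean batch."""
    model.train()

    # Build the adversarial example: perturb x in the direction of the loss gradient.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_adv = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss_adv, x_adv)[0]
    x_adv = (x_adv + epsilon * grad.sign()).detach()

    # Standard parameter update on the mixture of clean and adversarial inputs.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()


# Example usage with a toy linear classifier on random continuous features:
# model = torch.nn.Linear(32, 2); opt = torch.optim.SGD(model.parameters(), lr=0.1)
# x = torch.randn(8, 32); y = torch.randint(0, 2, (8,))
# adversarial_training_step(model, opt, x, y)
```

The trade-off flagged in the abstract (robustness gains at some cost to clean generalization) comes from this second loss term pulling the decision boundary toward the perturbed inputs.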

Vision transformers in domain adaptation and domain generalization: a study of robustness

S Alijani, J Fayyad, H Najjaran - Neural Computing and Applications, 2024 - Springer
Deep learning models are often evaluated in scenarios where the data distribution is
different from that used in the training and validation phases. This discrepancy presents a …

CodeAttack: Code-based adversarial attacks for pre-trained programming language models

A Jha, CK Reddy - Proceedings of the AAAI Conference on Artificial …, 2023 - ojs.aaai.org
Pre-trained programming language (PL) models (such as CodeT5, CodeBERT,
GraphCodeBERT, etc.) have the potential to automate software engineering tasks involving …

Adversarial robustness comparison of vision transformer and MLP-Mixer to CNNs

P Benz, S Ham, C Zhang, A Karjauv… - arXiv preprint arXiv …, 2021 - arxiv.org
Convolutional Neural Networks (CNNs) have become the de facto gold standard in
computer vision applications over the past years. Recently, however, new model architectures …