Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …
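
As an illustration of the gradient-based attacks such surveys catalogue, below is a minimal single-step FGSM sketch in PyTorch; the classifier `model`, inputs scaled to [0, 1], and the 8/255 budget are assumptions for illustration, not details taken from this paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """One-step Fast Gradient Sign Method (budget and input range assumed)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to the valid range.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```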

Interpreting adversarial examples in deep learning: A review

S Han, C Lin, C Shen, Q Wang, X Guan - ACM Computing Surveys, 2023 - dl.acm.org
Deep learning technology is increasingly being applied in safety-critical scenarios but has
recently been found to be susceptible to imperceptible adversarial perturbations. This raises …

Frequency domain model augmentation for adversarial attack

Y Long, Q Zhang, B Zeng, L Gao, X Liu, J Zhang… - European conference on …, 2022 - Springer
For black-box attacks, the gap between the substitute model and the victim model is usually
large, which often manifests as weak attack performance. Motivated by the observation that the …
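
A rough sketch of the spectrum-transformation idea, assuming square inputs in [0, 1] and a differentiable PyTorch classifier `model`; the DCT is applied through an explicit orthonormal basis matrix, and `rho`, `sigma`, and `n_copies` are illustrative values, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def dct_matrix(n, device):
    # Orthonormal DCT-II basis as an n x n matrix.
    k = torch.arange(n, device=device).float()
    basis = torch.cos(torch.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0] *= (1 / n) ** 0.5
    basis[1:] *= (2 / n) ** 0.5
    return basis

def spectrum_augment(x, rho=0.5, sigma=16 / 255):
    # DCT -> random spectral rescaling plus noise -> inverse DCT.
    n = x.shape[-1]                     # assumes H == W == n
    D = dct_matrix(n, x.device)
    spec = D @ (x + sigma * torch.randn_like(x)) @ D.t()
    mask = 1 + rho * (2 * torch.rand_like(spec) - 1)   # U(1 - rho, 1 + rho)
    return D.t() @ (spec * mask) @ D

def ssa_gradient(model, x, y, n_copies=4):
    # Average the attack gradient over several spectrum-transformed copies,
    # which simulates attacking an ensemble of diverse models.
    x = x.clone().detach().requires_grad_(True)
    loss = sum(F.cross_entropy(model(spectrum_augment(x)), y)
               for _ in range(n_copies)) / n_copies
    loss.backward()
    return x.grad
```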

Reflection backdoor: A natural backdoor attack on deep neural networks

Y Liu, X Ma, J Bailey, F Lu - Computer Vision–ECCV 2020: 16th European …, 2020 - Springer
Recent studies have shown that DNNs can be compromised by backdoor attacks crafted at
training time. A backdoor attack installs a backdoor into the victim model by injecting a …
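
A simplified sketch of reflection-style trigger injection: a blurred "reflection" image is blended additively into clean training images so the trigger looks like a natural artifact. The actual attack models physical reflections more carefully; the blend weight `alpha` and the box-blur kernel are assumptions.

```python
import torch
import torch.nn.functional as F

def inject_reflection(x, reflection, alpha=0.4, blur=5):
    """Blend a blurred 'reflection' layer into x (both [B, C, H, W] in [0, 1])."""
    c = x.shape[1]
    # Depthwise box blur: real reflections are typically out of focus.
    kernel = torch.ones(c, 1, blur, blur, device=x.device) / (blur * blur)
    r = F.conv2d(reflection, kernel, padding=blur // 2, groups=c)
    return (x + alpha * r).clamp(0, 1)
```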

Neural attention distillation: Erasing backdoor triggers from deep neural networks

Y Li, X Lyu, N Koren, L Lyu, B Li, X Ma - arxiv preprint arxiv:2101.05930, 2021 - arxiv.org
Deep neural networks (DNNs) are known to be vulnerable to backdoor attacks: training-time
attacks that inject a trigger pattern into a small proportion of the training data so as to control the …
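
A sketch of the attention-distillation term, assuming a teacher fine-tuned on a small clean subset and matching lists of intermediate feature maps from teacher and student; the attention definition (a channel-wise power mean) follows common distillation practice, and `beta` is illustrative.

```python
import torch
import torch.nn.functional as F

def attention_map(feat, p=2):
    # Collapse channels of a [B, C, H, W] feature map into a spatial
    # attention map, then L2-normalize it per sample.
    a = feat.abs().pow(p).mean(dim=1)            # [B, H, W]
    return F.normalize(a.flatten(1), dim=1)      # [B, H * W]

def nad_loss(student_feats, teacher_feats, beta=1000.0):
    # Pull the (possibly backdoored) student's attention toward the
    # clean-finetuned teacher's, layer by layer.
    return beta * sum(
        (attention_map(s) - attention_map(t)).pow(2).sum(dim=1).mean()
        for s, t in zip(student_feats, teacher_feats)
    )
```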

Adversarial weight perturbation helps robust generalization

D Wu, ST Xia, Y Wang - Advances in neural information …, 2020 - proceedings.neurips.cc
The study of improving the robustness of deep neural networks against adversarial
examples has grown rapidly in recent years. Among these efforts, adversarial training is the most …
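
A simplified one-step version of the weight-perturbation idea, assuming a PyTorch model and precomputed adversarial examples `x_adv`; the published method uses layer-wise normalization and its own schedule, so `gamma` here is only illustrative.

```python
import torch

def perturb_weights(model, loss_fn, x_adv, y, gamma=5e-3):
    """Push each weight in the direction that increases the adversarial
    loss, scaled to gamma times that weight's own norm; returns the
    perturbations so they can be removed after the training step."""
    grads = torch.autograd.grad(loss_fn(model(x_adv), y), list(model.parameters()))
    vs = []
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            v = gamma * p.norm() * g / (g.norm() + 1e-12)
            p.add_(v)
            vs.append(v)
    return vs

def restore_weights(model, vs):
    with torch.no_grad():
        for p, v in zip(model.parameters(), vs):
            p.sub_(v)
```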

Improving adversarial robustness requires revisiting misclassified examples

Y Wang, D Zou, J Yi, J Bailey, X Ma… - … conference on learning …, 2019 - openreview.net
Deep neural networks (DNNs) are vulnerable to adversarial examples crafted by
imperceptible perturbations. A range of defense techniques have been proposed to improve …
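
This line of work leads to the MART objective, which upweights the robustness term for clean examples the model already misclassifies; a sketch under the assumption of a softmax classifier, with `lam` as an illustrative trade-off weight.

```python
import torch
import torch.nn.functional as F

def mart_loss(model, x, x_adv, y, lam=5.0):
    p_adv = F.softmax(model(x_adv), dim=1)
    p_clean = F.softmax(model(x), dim=1)

    # Boosted cross-entropy: also push down the strongest wrong class.
    p_true = p_adv.gather(1, y[:, None]).squeeze(1)
    p_wrong = p_adv.scatter(1, y[:, None], 0.0).max(dim=1).values
    bce = -torch.log(p_true + 1e-12) - torch.log(1 - p_wrong + 1e-12)

    # KL(clean || adversarial), weighted by how poorly the clean example
    # is classified: misclassified examples matter more.
    kl = (p_clean * (torch.log(p_clean + 1e-12) - torch.log(p_adv + 1e-12))).sum(dim=1)
    weight = 1 - p_clean.gather(1, y[:, None]).squeeze(1)
    return (bce + lam * kl * weight).mean()
```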

Naturalistic physical adversarial patch for object detectors

YCT Hu, BH Kung, DS Tan, JC Chen… - Proceedings of the …, 2021 - openaccess.thecvf.com
Most prior works on physical adversarial attacks focus mainly on attack performance but
seldom enforce any restrictions on the appearance of the generated adversarial patches …
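
A sketch of latent-space patch optimization; `generator` (latent to patch image), `detector` (image to objectness scores), the latent size, and `paste_patch` are all hypothetical stand-ins. The design point is that optimizing a GAN latent rather than raw pixels keeps the patch on the natural-image manifold.

```python
import torch

def paste_patch(scenes, patch, top=50, left=50):
    # Overlay the patch at a fixed location (placement kept trivial here).
    out = scenes.clone()
    _, _, h, w = patch.shape
    out[:, :, top:top + h, left:left + w] = patch
    return out

def optimize_patch_latent(generator, detector, scenes, steps=200, lr=0.01):
    z = torch.randn(1, 128, requires_grad=True)        # latent size assumed
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        patch = generator(z).clamp(0, 1)
        scores = detector(paste_patch(scenes, patch))  # target-class objectness
        loss = scores.mean()                           # suppress detections
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z).detach()
```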

Admix: Enhancing the transferability of adversarial attacks

X Wang, X He, J Wang, K He - Proceedings of the IEEE/CVF …, 2021 - openaccess.thecvf.com
Deep neural networks are known to be extremely vulnerable to adversarial examples in the
white-box setting. Moreover, the malicious adversarial examples crafted on the surrogate (source) …
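
A sketch of the Admix gradient computation, assuming `x_other` is a same-shaped batch drawn from other classes; the mixing weight `eta`, the number of mixed copies, and the dyadic scaling follow the paper's general recipe, but the values are illustrative.

```python
import torch
import torch.nn.functional as F

def admix_gradient(model, x, y, x_other, eta=0.2, n_mix=3, n_scales=5):
    # Average attack gradients over inputs admixed with images from other
    # categories and over dyadic scale copies, keeping the original label y.
    x = x.clone().detach().requires_grad_(True)
    loss = 0.0
    for _ in range(n_mix):
        idx = torch.randperm(x_other.shape[0], device=x.device)
        mixed = x + eta * x_other[idx]      # mix in a small "other" portion
        for i in range(n_scales):
            loss = loss + F.cross_entropy(model(mixed / 2 ** i), y)
    (loss / (n_mix * n_scales)).backward()
    return x.grad
```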

Understanding adversarial attacks on deep learning based medical image analysis systems

X Ma, Y Niu, L Gu, Y Wang, Y Zhao, J Bailey, F Lu - Pattern Recognition, 2021 - Elsevier
Deep neural networks (DNNs) have become popular for medical image analysis tasks like
cancer diagnosis and lesion detection. However, a recent study demonstrates that medical …
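
Studies of this kind typically probe medical classifiers with standard attacks; a minimal L-inf PGD sketch, assuming a differentiable PyTorch `model` and inputs in [0, 1], with illustrative budget values.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=4 / 255, alpha=1 / 255, steps=10):
    # Random start inside the eps-ball, then iterated signed-gradient steps.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            # Project back into the eps-ball and the valid pixel range.
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```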