Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …

Gan inversion: A survey

W Xia, Y Zhang, Y Yang, JH Xue… - IEEE transactions on …, 2022 - ieeexplore.ieee.org
GAN inversion aims to invert a given image back into the latent space of a pretrained GAN
model so that the image can be faithfully reconstructed from the inverted code by the …
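
The inversion task described in this snippet is commonly posed as an optimization over the latent code; a minimal sketch of that objective follows (the notation and the optional perceptual term are standard conventions, not taken from this particular survey):

```latex
z^{*} = \arg\min_{z}\; \lVert x - G(z) \rVert_{2}^{2}
        + \lambda\, \mathcal{L}_{\mathrm{perc}}\!\big(x, G(z)\big),
```

where $G$ is the pretrained generator, $x$ the target image, and $\mathcal{L}_{\mathrm{perc}}$ an optional perceptual (feature-space) distance weighted by $\lambda$.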

Threat of adversarial attacks on deep learning in computer vision: A survey

N Akhtar, A Mian - IEEE Access, 2018 - ieeexplore.ieee.org
Deep learning is at the heart of the current rise of artificial intelligence. In the field of
computer vision, it has become the workhorse for applications ranging from self-driving cars …

Backdoor learning: A survey

Y Li, Y Jiang, Z Li, ST Xia - IEEE Transactions on Neural …, 2022 - ieeexplore.ieee.org
Backdoor attack intends to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …

LAS-AT: adversarial training with learnable attack strategy

X Jia, Y Zhang, B Wu, K Ma… - Proceedings of the …, 2022 - openaccess.thecvf.com
Adversarial training (AT) is always formulated as a minimax problem, whose
performance depends on the inner optimization that involves the generation of adversarial …
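
The minimax formulation this snippet refers to is conventionally written as follows (a standard sketch of the AT objective, not this paper's specific learnable-strategy variant):

```latex
\min_{\theta}\; \mathbb{E}_{(x,y)\sim \mathcal{D}}
\left[ \max_{\lVert \delta \rVert_{p} \le \epsilon}
\mathcal{L}\big(f_{\theta}(x + \delta),\, y\big) \right],
```

where the inner maximization generates a worst-case perturbation $\delta$ within an $\ell_p$ ball of radius $\epsilon$, and the outer minimization trains the model parameters $\theta$ against it.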

Generating transferable 3d adversarial point cloud via random perturbation factorization

B He, J Liu, Y Li, S Liang, J Li, X Jia… - Proceedings of the AAAI …, 2023 - ojs.aaai.org
Recent studies have demonstrated that existing deep neural networks (DNNs) on 3D point
clouds are vulnerable to adversarial examples, especially under the white-box settings …

A comprehensive study on the robustness of deep learning-based image classification and object detection in remote sensing: Surveying and benchmarking

S Mei, J Lian, X Wang, Y Su, M Ma… - Journal of Remote …, 2024 - spj.science.org
Deep neural networks (DNNs) have found widespread applications in interpreting remote
sensing (RS) imagery. However, it has been demonstrated in previous works that DNNs are …

Rethinking the trigger of backdoor attack

Y Li, T Zhai, B Wu, Y Jiang, Z Li, S Xia - arXiv preprint arXiv:2004.04692, 2020 - arxiv.org
Backdoor attack intends to inject a hidden backdoor into deep neural networks (DNNs),
such that the prediction of the infected model will be maliciously changed if the hidden …

Bias-based universal adversarial patch attack for automatic check-out

A Liu, J Wang, X Liu, B Cao, C Zhang, H Yu - Computer Vision–ECCV …, 2020 - Springer
Adversarial examples are inputs with imperceptible perturbations that easily mislead
deep neural networks (DNNs). Recently, adversarial patch, with noise confined to a small …

Boosting the transferability of adversarial attacks with reverse adversarial perturbation

Z Qin, Y Fan, Y Liu, L Shen, Y Zhang… - Advances in neural …, 2022 - proceedings.neurips.cc
Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples,
which can produce erroneous predictions by injecting imperceptible perturbations. In this …
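
The perturbation mechanism these snippets describe can be illustrated with a one-step sign-gradient attack (FGSM-style) on a toy linear classifier; the weights, input, and the deliberately large step size here are illustrative assumptions, not drawn from any of the cited papers:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary classifier: fixed weights for illustration, not trained.
w = np.array([2.0, -1.5, 0.5])
b = 0.1

def predict(x):
    # Probability that x belongs to class 1.
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    # For binary cross-entropy, the gradient of the loss w.r.t. the
    # input of a linear model is (p - y) * w; step in its sign direction.
    p = predict(x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x = np.array([0.5, -0.2, 0.3])  # clean input, true label y = 1
y = 1.0
# eps is exaggerated here so the flip is visible on a 3-d toy example;
# attacks on images use much smaller, visually imperceptible budgets.
x_adv = fgsm(x, y, eps=0.6)

print(predict(x))      # confidently correct (> 0.5)
print(predict(x_adv))  # pushed to the wrong side of the boundary
```

The same sign-of-gradient step underlies many transfer attacks: because the perturbation direction depends only on the loss gradient, it often carries over between models trained on similar data.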