A survey on physical adversarial attack in computer vision

D Wang, W Yao, T Jiang, G Tang, X Chen - arXiv preprint arXiv …, 2022 - arxiv.org
Over the past decade, deep learning, with its strong feature learning capability, has revolutionized
conventional tasks that rely on hand-crafted feature extraction, leading to substantial …

A comprehensive study on the robustness of deep learning-based image classification and object detection in remote sensing: Surveying and benchmarking

S Mei, J Lian, X Wang, Y Su, M Ma… - Journal of Remote …, 2024 - spj.science.org
Deep neural networks (DNNs) have found widespread applications in interpreting remote
sensing (RS) imagery. However, it has been demonstrated in previous works that DNNs are …

Security of target recognition for UAV forestry remote sensing based on multi-source data fusion transformer framework

H Feng, Q Li, W Wang, AK Bashir, AK Singh, J Xu… - Information …, 2024 - Elsevier
Unmanned Aerial Vehicle (UAV) remote sensing object recognition plays a vital
role in a variety of sectors including military, agriculture, forestry, and construction. Accurate …

Towards effective adversarial textured 3D meshes on physical face recognition

X Yang, C Liu, L Xu, Y Wang, Y Dong… - Proceedings of the …, 2023 - openaccess.thecvf.com
Face recognition is a prevailing authentication solution in numerous biometric applications.
Physical adversarial attacks, as an important surrogate, can identify the weaknesses of face …

Boosting the transferability of adversarial attacks with reverse adversarial perturbation

Z Qin, Y Fan, Y Liu, L Shen, Y Zhang… - Advances in neural …, 2022 - proceedings.neurips.cc
Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples,
which can produce erroneous predictions by injecting imperceptible perturbations. In this …

CBA: Contextual background attack against optical aerial detection in the physical world

J Lian, X Wang, Y Su, M Ma… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Patch-based physical attacks have raised increasing concern. However, most existing
methods focus on obscuring targets captured on the ground, and some of these methods are …

Boosting transferability of physical attack against detectors by redistributing separable attention

Y Zhang, Z Gong, Y Zhang, K Bin, Y Li, J Qi, H Wen… - Pattern Recognition, 2023 - Elsevier
Research on attack transferability is of great importance, as it can guide how to conduct
an adversarial attack without any knowledge of the target models. However, it …

Attention-guided evolutionary attack with elastic-net regularization on face recognition

C Hu, Y Li, Z Feng, X Wu - Pattern Recognition, 2023 - Elsevier
In recent years, face recognition has achieved promising results along with the development
of advanced Deep Neural Networks (DNNs). The existing face recognition systems are …

Understanding adversarial robustness against on-manifold adversarial examples

J Xiao, L Yang, Y Fan, J Wang, ZQ Luo - Pattern Recognition, 2025 - Elsevier
Deep neural networks (DNNs) are shown to be vulnerable to adversarial examples. A well-
trained model can be easily attacked by adding small perturbations to the original data. One …

Attacks in adversarial machine learning: A systematic survey from the life-cycle perspective

B Wu, Z Zhu, L Liu, Q Liu, Z He, S Lyu - arXiv preprint arXiv:2302.09457, 2023 - arxiv.org
Adversarial machine learning (AML) studies the adversarial phenomenon of machine
learning, whereby models may make predictions that are inconsistent with or unexpected by humans. Some …