Enhancing adversarial example transferability with an intermediate level attack

Q Huang, I Katsman, H He, Z Gu… - Proceedings of the …, 2019 - openaccess.thecvf.com
Neural networks are vulnerable to adversarial examples, malicious inputs crafted to fool
trained models. Adversarial examples often exhibit black-box transfer, meaning that …

Towards transferable adversarial attack against deep face recognition

Y Zhong, W Deng - IEEE Transactions on Information Forensics …, 2020 - ieeexplore.ieee.org
Face recognition has achieved great success in the last five years due to the development of
deep learning methods. However, deep convolutional neural networks (DCNNs) have been …

Advfaces: Adversarial face synthesis

D Deb, J Zhang, AK Jain - 2020 IEEE International Joint …, 2020 - ieeexplore.ieee.org
Face recognition systems have been shown to be vulnerable to adversarial faces resulting
from adding small perturbations to probe images. Such adversarial images can lead state-of …

Towards face encryption by generating adversarial identity masks

X Yang, Y Dong, T Pang, H Su, J Zhu… - Proceedings of the …, 2021 - openaccess.thecvf.com
As billions of items of personal data are shared through social media and networks, data
privacy and security have drawn increasing attention. Several attempts have been made …

Backdooring convolutional neural networks via targeted weight perturbations

J Dumford, W Scheirer - 2020 IEEE International Joint …, 2020 - ieeexplore.ieee.org
We present a new white-box backdoor attack that exploits a vulnerability of convolutional
neural networks (CNNs). In particular, we examine the application of facial recognition …

Perturbing across the feature hierarchy to improve standard and strict blackbox attack transferability

N Inkawhich, K Liang, B Wang… - Advances in …, 2020 - proceedings.neurips.cc
We consider the blackbox transfer-based targeted adversarial attack threat model in the
realm of deep neural network (DNN) image classifiers. Rather than focusing on crossing …

Detecting and mitigating adversarial perturbations for robust face recognition

G Goswami, A Agarwal, N Ratha, R Singh… - International Journal of …, 2019 - Springer
Deep neural network (DNN) architecture based models have high expressive power and
learning capacity. However, they are essentially a black box method since it is not easy to …

A little robustness goes a long way: Leveraging robust features for targeted transfer attacks

J Springer, M Mitchell… - Advances in Neural …, 2021 - proceedings.neurips.cc
Adversarial examples for neural network image classifiers are known to be transferable:
examples optimized to be misclassified by a source classifier are often misclassified as well …

Adversarial learning with margin-based triplet embedding regularization

Y Zhong, W Deng - Proceedings of the IEEE/CVF …, 2019 - openaccess.thecvf.com
Deep neural networks (DNNs) have achieved great success on a variety of
computer vision tasks; however, they are highly vulnerable to adversarial attacks. To …

Outsmarting Biometric Imposters: Enhancing Iris-Recognition System Security through Physical Adversarial Example Generation and PAD Fine-Tuning

Y Ogino, K Kakizaki, T Toizumi… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
In this paper, we address the vulnerabilities of iris recognition systems to both image-based
impersonation attacks and Presentation Attacks (PAs) in physical environments. While …