Enhancing adversarial example transferability with an intermediate level attack
Neural networks are vulnerable to adversarial examples, malicious inputs crafted to fool
trained models. Adversarial examples often exhibit black-box transfer, meaning that …
Towards transferable adversarial attack against deep face recognition
Face recognition has achieved great success in the last five years due to the development of
deep learning methods. However, deep convolutional neural networks (DCNNs) have been …
Advfaces: Adversarial face synthesis
Face recognition systems have been shown to be vulnerable to adversarial faces resulting
from adding small perturbations to probe images. Such adversarial images can lead state-of …
Towards face encryption by generating adversarial identity masks
As billions of items of personal data are shared through social media and networks, data
privacy and security have drawn increasing attention. Several attempts have been made …
Backdooring convolutional neural networks via targeted weight perturbations
J Dumford, W Scheirer - 2020 IEEE International Joint …, 2020 - ieeexplore.ieee.org
We present a new white-box backdoor attack that exploits a vulnerability of convolutional
neural networks (CNNs). In particular, we examine the application of facial recognition …
Perturbing across the feature hierarchy to improve standard and strict blackbox attack transferability
We consider the blackbox transfer-based targeted adversarial attack threat model in the
realm of deep neural network (DNN) image classifiers. Rather than focusing on crossing …
Detecting and mitigating adversarial perturbations for robust face recognition
Deep neural network (DNN) architecture based models have high expressive power and
learning capacity. However, they are essentially a black box method since it is not easy to …
A little robustness goes a long way: Leveraging robust features for targeted transfer attacks
J Springer, M Mitchell… - Advances in Neural …, 2021 - proceedings.neurips.cc
Adversarial examples for neural network image classifiers are known to be transferable:
examples optimized to be misclassified by a source classifier are often misclassified as well …
Adversarial learning with margin-based triplet embedding regularization
Deep neural networks (DNNs) have achieved great success on a variety of computer
vision tasks; however, they are highly vulnerable to adversarial attacks. To …
Outsmarting Biometric Imposters: Enhancing Iris-Recognition System Security through Physical Adversarial Example Generation and PAD Fine-Tuning
In this paper, we address the vulnerabilities of iris recognition systems to both image-based
impersonation attacks and Presentation Attacks (PAs) in physical environments. While …