Introducing competition to boost the transferability of targeted adversarial examples through clean feature mixup

J Byun, MJ Kwon, S Cho, Y Kim… - Proceedings of the …, 2023 - openaccess.thecvf.com
Deep neural networks are widely known to be susceptible to adversarial examples, which
can cause incorrect predictions through subtle input modifications. These adversarial …
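As a generic illustration of how a "subtle input modification" can flip a classifier's prediction, the sketch below implements a single-step FGSM perturbation in PyTorch; this is only an assumed, minimal example of an adversarial attack, not the clean-feature-mixup method of the paper above, and the model, inputs x, labels y, and eps budget are placeholder assumptions.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, eps=8 / 255):
        # One-step FGSM (hypothetical helper): add a small, sign-based
        # perturbation that pushes the input toward misclassification.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Step in the loss-increasing direction; keep pixels in [0, 1].
        return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()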

Adversarial ranking attack and defense

M Zhou, Z Niu, L Wang, Q Zhang, G Hua - Computer Vision–ECCV 2020 …, 2020 - Springer
Deep Neural Network (DNN) classifiers are vulnerable to adversarial attack, where
an imperceptible perturbation could result in misclassification. However, the vulnerability of …

Robust design of deep neural networks against adversarial attacks based on Lyapunov theory

A Rahnama, AT Nguyen, E Raff - Proceedings of the IEEE …, 2020 - openaccess.thecvf.com
Deep neural networks (DNNs) are vulnerable to subtle adversarial perturbations applied to
the input. These adversarial perturbations, though imperceptible, can easily mislead the …

Adversarial examples for edge detection: They exist, and they transfer

C Cosgrove, A Yuille - Proceedings of the IEEE/CVF Winter …, 2020 - openaccess.thecvf.com
Convolutional neural networks have recently advanced the state of the art in many tasks
including edge and object boundary detection. However, in this paper, we demonstrate that …

System and method for training a neural network system

AR MOGHADDAM - US Patent 11,164,085, 2021 - Google Patents
A computer-implemented method for training a neural network system. The
method includes receiving at least a first data vector at a first layer of the neural network …