Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …

Adversarial attacks and defenses in deep learning: From a perspective of cybersecurity

S Zhou, C Liu, D Ye, T Zhu, W Zhou, PS Yu - ACM Computing Surveys, 2022 - dl.acm.org
The outstanding performance of deep neural networks has promoted deep learning
applications in a broad set of domains. However, the potential risks caused by adversarial …

Threat of adversarial attacks on deep learning in computer vision: A survey

N Akhtar, A Mian - IEEE Access, 2018 - ieeexplore.ieee.org
Deep learning is at the heart of the current rise of artificial intelligence. In the field of
computer vision, it has become the workhorse for applications ranging from self-driving cars …

A pilot study of query-free adversarial attack against stable diffusion

H Zhuang, Y Zhang, S Liu - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Despite the record-breaking performance in Text-to-Image (T2I) generation by Stable
Diffusion, less research attention is paid to its adversarial robustness. In this work, we study …

A primer on zeroth-order optimization in signal processing and machine learning: Principals, recent advances, and applications

S Liu, PY Chen, B Kailkhura, G Zhang… - IEEE Signal …, 2020 - ieeexplore.ieee.org
Zeroth-order (ZO) optimization is a subset of gradient-free optimization that emerges in many
signal processing and machine learning (ML) applications. It is used for solving optimization …
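To make the concept concrete: zeroth-order optimization replaces an explicit gradient with an estimate built purely from function evaluations. A minimal sketch of the standard two-point (central-difference) ZO gradient estimator, using only numpy and a toy quadratic objective chosen here for illustration (the function, smoothing radius `mu`, and direction count are assumptions, not the paper's experimental setup):

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, n_dirs=2000, rng=None):
    """Two-point zeroth-order gradient estimate of f at x:
    average over random Gaussian directions u of
        (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u.
    Only queries f; never touches an analytic gradient."""
    rng = np.random.default_rng(0) if rng is None else rng
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / n_dirs

# Toy objective: f(x) = ||x||^2, whose true gradient is 2*x.
f = lambda x: float(np.sum(x ** 2))
x = np.array([1.0, -2.0, 0.5])
g_est = zo_gradient(f, x)   # should be close to [2.0, -4.0, 1.0]
```

For a quadratic the central difference is exact per direction, so the only error is the Monte-Carlo averaging over directions; more directions means a tighter estimate at the cost of more queries, which is exactly the query-budget trade-off black-box attacks care about.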

Improving the transferability of targeted adversarial examples through object-based diverse input

J Byun, S Cho, MJ Kwon, HS Kim… - Proceedings of the …, 2022 - openaccess.thecvf.com
The transferability of adversarial examples allows the deception on black-box models, and
transfer-based targeted attacks have attracted a lot of interest due to their practical …

Boosting the transferability of adversarial attacks with reverse adversarial perturbation

Z Qin, Y Fan, Y Liu, L Shen, Y Zhang… - Advances in neural …, 2022 - proceedings.neurips.cc
Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples,
which can produce erroneous predictions by injecting imperceptible perturbations. In this …
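The "imperceptible perturbation" idea can be illustrated with a minimal gradient-sign (FGSM-style) sketch on a toy logistic model; the model, weights, and epsilon below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step L_inf attack sketch on a logistic model:
    loss L = log(1 + exp(-y * (w.x + b))), so
    grad_x L = -y * w / (1 + exp(y * (w.x + b))),
    and the adversarial input is x + eps * sign(grad_x L)."""
    z = y * (np.dot(w, x) + b)
    grad = -y * w / (1.0 + np.exp(z))
    return x + eps * np.sign(grad)

# Hypothetical linear classifier: predict +1 iff w.x + b > 0.
w = np.array([1.0, -1.0]); b = 0.0
x = np.array([0.3, 0.1])            # clean input, classified +1 (w.x = 0.2)
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.3)
# w . x_adv = -0.4 < 0: a small sign-aligned step flips the prediction
```

Each coordinate moves by at most `eps`, which is the sense in which the perturbation is bounded and, for images, visually imperceptible.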

RayS: A ray searching method for hard-label adversarial attack

J Chen, Q Gu - Proceedings of the 26th ACM SIGKDD International …, 2020 - dl.acm.org
Deep neural networks are vulnerable to adversarial attacks. Among different attack settings,
the most challenging yet the most practical one is the hard-label setting where the attacker …
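In the hard-label setting the attacker sees only the predicted class, never scores or gradients. The basic query primitive behind ray-search-style attacks can be sketched as a binary search for the boundary along a direction; the toy classifier below is a hypothetical stand-in, not the paper's algorithm:

```python
import numpy as np

def boundary_radius(predict, x, direction, r_hi=10.0, tol=1e-4):
    """Smallest radius r such that predict(x + r*direction) differs
    from predict(x), found by binary search using hard labels only."""
    y0 = predict(x)
    # The search assumes the label flips somewhere before r_hi.
    assert predict(x + r_hi * direction) != y0
    lo, hi = 0.0, r_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if predict(x + mid * direction) == y0:
            lo = mid
        else:
            hi = mid
    return hi

# Toy hard-label oracle: class is the sign of the first coordinate.
predict = lambda x: int(x[0] > 0)
x = np.array([2.0, 0.0])
d = np.array([-1.0, 0.0])
r = boundary_radius(predict, x, d)   # distance to the boundary, about 2.0
```

Attacks in this setting then spend their query budget searching over directions for the one with the smallest such radius, i.e. the nearest decision boundary.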

A survey of attacks on large vision-language models: Resources, advances, and future trends

D Liu, M Yang, X Qu, P Zhou, Y Cheng… - arXiv preprint arXiv …, 2024 - arxiv.org
With the significant development of large models in recent years, Large Vision-Language
Models (LVLMs) have demonstrated remarkable capabilities across a wide range of …

Transfer learning without knowing: Reprogramming black-box machine learning models with scarce data and limited resources

YY Tsai, PY Chen, TY Ho - International Conference on …, 2020 - proceedings.mlr.press
Current transfer learning methods are mainly based on finetuning a pretrained model with
target-domain data. Motivated by the techniques from adversarial machine learning (ML) …