Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …

Adversarial attacks and defenses in deep learning: From a perspective of cybersecurity

S Zhou, C Liu, D Ye, T Zhu, W Zhou, PS Yu - ACM Computing Surveys, 2022 - dl.acm.org
The outstanding performance of deep neural networks has promoted deep learning
applications in a broad set of domains. However, the potential risks caused by adversarial …

To generate or not? Safety-driven unlearned diffusion models are still easy to generate unsafe images... for now

Y Zhang, J Jia, X Chen, A Chen, Y Zhang, J Liu… - … on Computer Vision, 2024 - Springer
The recent advances in diffusion models (DMs) have revolutionized the generation of
realistic and complex images. However, these models also introduce potential safety …

A survey on safety-critical driving scenario generation—a methodological perspective

W Ding, C Xu, M Arief, H Lin, B Li… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Autonomous driving systems have witnessed significant development during the past years
thanks to advances in machine learning-enabled sensing and decision-making …

A pilot study of query-free adversarial attack against stable diffusion

H Zhuang, Y Zhang, S Liu - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Despite the record-breaking performance in Text-to-Image (T2I) generation by Stable
Diffusion, less research attention is paid to its adversarial robustness. In this work, we study …

Defensive unlearning with adversarial training for robust concept erasure in diffusion models

Y Zhang, X Chen, J Jia, Y Zhang… - Advances in …, 2025 - proceedings.neurips.cc
Diffusion models (DMs) have achieved remarkable success in text-to-image generation, but
they also pose safety risks, such as the potential generation of harmful content and copyright …

Recent advances in trustworthy explainable artificial intelligence: Status, challenges, and perspectives

A Rawal, J McCoy, DB Rawat… - IEEE Transactions …, 2021 - ieeexplore.ieee.org
Artificial intelligence (AI) and machine learning (ML) have come a long way from the earlier
days of conceptual theories to being an integral part of today's technological society. Rapid …

Topology attack and defense for graph neural networks: An optimization perspective

K Xu, H Chen, S Liu, PY Chen, TW Weng… - arXiv preprint arXiv …, 2019 - arxiv.org
Graph neural networks (GNNs), which apply deep neural networks to graph data, have
achieved significant performance for the task of semi-supervised node classification …

Adversarial t-shirt! Evading person detectors in a physical world

K Xu, G Zhang, S Liu, Q Fan, M Sun, H Chen… - Computer vision–ECCV …, 2020 - Springer
It is known that deep neural networks (DNNs) are vulnerable to adversarial attacks. The so-
called physical adversarial examples deceive DNN-based decision makers by attaching …

Threat of adversarial attacks on deep learning in computer vision: A survey

N Akhtar, A Mian - IEEE Access, 2018 - ieeexplore.ieee.org
Deep learning is at the heart of the current rise of artificial intelligence. In the field of
computer vision, it has become the workhorse for applications ranging from self-driving cars …