A comprehensive survey on poisoning attacks and countermeasures in machine learning

Z Tian, L Cui, J Liang, S Yu - ACM Computing Surveys, 2022 - dl.acm.org
The prosperity of machine learning has been accompanied by increasing attacks on the
training process. Among them, poisoning attacks have become an emerging threat during …

Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …

On evaluating adversarial robustness of large vision-language models

Y Zhao, T Pang, C Du, X Yang, C Li… - Advances in …, 2023 - proceedings.neurips.cc
Large vision-language models (VLMs) such as GPT-4 have achieved unprecedented
performance in response generation, especially with visual inputs, enabling more creative …

Content-based unrestricted adversarial attack

Z Chen, B Li, S Wu, K Jiang, S Ding… - Advances in Neural …, 2023 - proceedings.neurips.cc
Unrestricted adversarial attacks typically manipulate the semantic content of an image (e.g.,
color or texture) to create adversarial examples that are both effective and photorealistic …

Enhancing the transferability of adversarial attacks through variance tuning

X Wang, K He - Proceedings of the IEEE/CVF conference on …, 2021 - openaccess.thecvf.com
Deep neural networks are vulnerable to adversarial examples that mislead the models with
imperceptible perturbations. Though adversarial attacks have achieved incredible success …

How robust is Google's Bard to adversarial image attacks?

Y Dong, H Chen, J Chen, Z Fang, X Yang… - arXiv preprint arXiv …, 2023 - arxiv.org
Multimodal Large Language Models (MLLMs) that integrate text and other modalities
(especially vision) have achieved unprecedented performance in various multimodal tasks …

Frequency domain model augmentation for adversarial attack

Y Long, Q Zhang, B Zeng, L Gao, X Liu, J Zhang… - European conference on …, 2022 - Springer
For black-box attacks, the gap between the substitute model and the victim model is usually
large, which manifests as a weak attack performance. Motivated by the observation that the …

Shadows can be dangerous: Stealthy and effective physical-world adversarial attack by natural phenomenon

Y Zhong, X Liu, D Zhai, J Jiang… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Estimating the risk level of adversarial examples is essential for safely deploying machine
learning models in the real world. One popular approach for physical-world attacks is to …

Improving adversarial transferability via neuron attribution-based attacks

J Zhang, W Wu, J Huang, Y Huang… - Proceedings of the …, 2022 - openaccess.thecvf.com
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples. It is thus
imperative to devise effective attack algorithms to identify the deficiencies of DNNs …

Feature importance-aware transferable adversarial attacks

Z Wang, H Guo, Z Zhang, W Liu… - Proceedings of the …, 2021 - openaccess.thecvf.com
Transferability of adversarial examples is of central importance for attacking an unknown
model, which facilitates adversarial attacks in more practical scenarios, e.g., black-box attacks …