A comprehensive survey on poisoning attacks and countermeasures in machine learning

Z Tian, L Cui, J Liang, S Yu - ACM Computing Surveys, 2022 - dl.acm.org
The prosperity of machine learning has been accompanied by increasing attacks on the
training process. Among them, poisoning attacks have become an emerging threat during …

Trustworthy AI: From principles to practices

B Li, P Qi, B Liu, S Di, J Liu, J Pei, J Yi… - ACM Computing Surveys, 2023 - dl.acm.org
The rapid development of Artificial Intelligence (AI) technology has enabled the deployment
of various systems based on it. However, many current AI systems are found vulnerable to …

Ensemble adversarial training: Attacks and defenses

F Tramèr, A Kurakin, N Papernot, I Goodfellow… - arXiv preprint arXiv …, 2017 - arxiv.org
Adversarial examples are perturbed inputs designed to fool machine learning models.
Adversarial training injects such examples into training data to increase robustness. To …

Threat of adversarial attacks on deep learning in computer vision: A survey

N Akhtar, A Mian - IEEE Access, 2018 - ieeexplore.ieee.org
Deep learning is at the heart of the current rise of artificial intelligence. In the field of
computer vision, it has become the workhorse for applications ranging from self-driving cars …

On evaluating adversarial robustness of large vision-language models

Y Zhao, T Pang, C Du, X Yang, C Li… - Advances in …, 2024 - proceedings.neurips.cc
Large vision-language models (VLMs) such as GPT-4 have achieved unprecedented
performance in response generation, especially with visual inputs, enabling more creative …

Enhancing the transferability of adversarial attacks through variance tuning

X Wang, K He - Proceedings of the IEEE/CVF Conference on …, 2021 - openaccess.thecvf.com
Deep neural networks are vulnerable to adversarial examples that mislead the models with
imperceptible perturbations. Though adversarial attacks have achieved incredible success …

Feature importance-aware transferable adversarial attacks

Z Wang, H Guo, Z Zhang, W Liu… - Proceedings of the …, 2021 - openaccess.thecvf.com
Transferability of adversarial examples is of central importance for attacking an unknown
model, which facilitates adversarial attacks in more practical scenarios, e.g., black-box attacks …

Nesterov accelerated gradient and scale invariance for adversarial attacks

J Lin, C Song, K He, L Wang, JE Hopcroft - arXiv preprint arXiv …, 2019 - arxiv.org
Deep learning models are vulnerable to adversarial examples crafted by applying human-
imperceptible perturbations on benign inputs. However, under the black-box setting, most …

Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …

Frequency domain model augmentation for adversarial attack

Y Long, Q Zhang, B Zeng, L Gao, X Liu, J Zhang… - European Conference on …, 2022 - Springer
For black-box attacks, the gap between the substitute model and the victim model is usually
large, which manifests as a weak attack performance. Motivated by the observation that the …