How deep learning sees the world: A survey on adversarial attacks & defenses

JC Costa, T Roxo, H Proença, PRM Inácio - IEEE Access, 2024 - ieeexplore.ieee.org
Deep Learning is currently used to perform multiple tasks, such as object recognition, face
recognition, and natural language processing. However, Deep Neural Networks (DNNs) are …

Better diffusion models further improve adversarial training

Z Wang, T Pang, C Du, M Lin… - … on machine learning, 2023 - proceedings.mlr.press
It has been recognized that the data generated by the denoising diffusion probabilistic
model (DDPM) improves adversarial training. After two years of rapid development in …

On evaluating adversarial robustness of large vision-language models

Y Zhao, T Pang, C Du, X Yang, C Li… - Advances in …, 2023 - proceedings.neurips.cc
Large vision-language models (VLMs) such as GPT-4 have achieved unprecedented
performance in response generation, especially with visual inputs, enabling more creative …

Decoupled Kullback-Leibler divergence loss

J Cui, Z Tian, Z Zhong, X Qi, B Yu… - Advances in Neural …, 2025 - proceedings.neurips.cc
In this paper, we delve deeper into the Kullback–Leibler (KL) Divergence loss and
mathematically prove that it is equivalent to the Decoupled Kullback-Leibler (DKL) …

Robust evaluation of diffusion-based adversarial purification

M Lee, D Kim - Proceedings of the IEEE/CVF International …, 2023 - openaccess.thecvf.com
We question the current evaluation practice on diffusion-based purification methods.
Diffusion-based purification methods aim to remove adversarial effects from an input data …

Boosting accuracy and robustness of student models via adaptive adversarial distillation

B Huang, M Chen, Y Wang, J Lu… - Proceedings of the …, 2023 - openaccess.thecvf.com
Distilled student models in teacher-student architectures are widely considered for
computationally efficient deployment in real-time applications and edge devices. However …

Improving generalization of adversarial training via robust critical fine-tuning

K Zhu, X Hu, J Wang, X Xie… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Deep neural networks are susceptible to adversarial examples, posing a significant security
risk in critical applications. Adversarial Training (AT) is a well-established technique to …