Stability analysis and generalization bounds of adversarial training

J Xiao, Y Fan, R Sun, J Wang… - Advances in Neural …, 2022 - proceedings.neurips.cc
In adversarial machine learning, deep neural networks can fit the adversarial examples on
the training dataset but have poor generalization ability on the test set. This phenomenon is …

Understanding adversarial robustness against on-manifold adversarial examples

J Xiao, L Yang, Y Fan, J Wang, ZQ Luo - Pattern Recognition, 2025 - Elsevier
Deep neural networks (DNNs) are shown to be vulnerable to adversarial examples. A well-
trained model can be easily attacked by adding small perturbations to the original data. One …

Uniformly stable algorithms for adversarial training and beyond

J Xiao, J Zhang, ZQ Luo, A Ozdaglar - arXiv preprint arXiv:2405.01817, 2024 - arxiv.org
In adversarial machine learning, neural networks suffer from a significant issue known as
robust overfitting, where the robust test accuracy decreases over epochs (Rice et al., 2020) …

Bridging the gap: Rademacher complexity in robust and standard generalization

J Xiao, Q Long, W Su - The Thirty Seventh Annual …, 2024 - proceedings.mlr.press
Training Deep Neural Networks (DNNs) with adversarial examples often results in
poor generalization to test-time adversarial data. This paper investigates this issue, known …

A closer look at curriculum adversarial training: from an online perspective

L Shi, W Liu - Proceedings of the AAAI Conference on Artificial …, 2024 - ojs.aaai.org
Curriculum adversarial training empirically finds that gradually increasing the hardness of
adversarial examples can further improve the adversarial robustness of the trained model …

Stability and generalization in free adversarial training

X Cheng, K Fu, F Farnia - arXiv preprint arXiv:2404.08980, 2024 - arxiv.org
While adversarial training methods have significantly improved the robustness of deep
neural networks against norm-bounded adversarial perturbations, the generalization gap …

Enhancing adversarial robustness for deep metric learning via neural discrete adversarial training

C Li, Z Zhu, R Niu, Y Zhao - Computers & Security, 2024 - Elsevier
Due to the security concerns arising from adversarial vulnerability in deep metric learning
models, it is essential to enhance their adversarial robustness for secure neural network …

RAMP: Boosting Adversarial Robustness Against Multiple Perturbations for Universal Robustness

E Jiang, G Singh - Advances in Neural Information …, 2025 - proceedings.neurips.cc
Most existing works focus on improving robustness against adversarial attacks bounded by
a single $l_p$ norm using adversarial training (AT). However, these AT models' multiple …

Towards Universal Certified Robustness with Multi-Norm Training

E Jiang, G Singh - arXiv preprint arXiv:2410.03000, 2024 - arxiv.org
Existing certified training methods can only train models to be robust against a certain
perturbation type (e.g., $l_\infty$ or $l_2$). However, an $l_\infty$ certifiably robust model …

Improving adversarial training for multiple perturbations through the lens of uniform stability

J Xiao, Z Qin, Y Fan, B Wu, J Wang… - The Second Workshop …, 2023 - openreview.net
In adversarial training (AT), most existing works focus on AT with a single type of
perturbation, such as the $\ell_\infty$ attacks. However, deep neural networks (DNNs) are …