Stability analysis and generalization bounds of adversarial training

J Xiao, Y Fan, R Sun, J Wang… - Advances in Neural …, 2022 - proceedings.neurips.cc
In adversarial machine learning, deep neural networks can fit the adversarial examples on
the training dataset but have poor generalization ability on the test set. This phenomenon is …

On the adversarial robustness of out-of-distribution generalization models

X Zou, W Liu - Advances in Neural Information Processing …, 2023 - proceedings.neurips.cc
Out-of-distribution (OOD) generalization has attracted increasing research attention
in recent years, due to its promising experimental results in real-world applications …

PAC-Bayesian spectrally-normalized bounds for adversarially robust generalization

J Xiao, R Sun, ZQ Luo - Advances in Neural Information …, 2023 - proceedings.neurips.cc
Deep neural networks (DNNs) are vulnerable to adversarial attacks. It is found empirically
that adversarially robust generalization is crucial in establishing defense algorithms against …

Understanding adversarial robustness against on-manifold adversarial examples

J Xiao, L Yang, Y Fan, J Wang, ZQ Luo - Pattern Recognition, 2025 - Elsevier
Deep neural networks (DNNs) are shown to be vulnerable to adversarial examples. A well-
trained model can be easily attacked by adding small perturbations to the original data. One …

Adversarially robust hypothesis transfer learning

Y Wang, R Arora - Forty-first International Conference on Machine …, 2024 - openreview.net
In this work, we explore Hypothesis Transfer Learning (HTL) under adversarial attacks. In
this setting, a learner has access to a training dataset of size $ n $ from an underlying …

Stability and Generalization of Adversarial Training for Shallow Neural Networks with Smooth Activation

K Zhang, Y Wang, R Arora - Advances in Neural …, 2025 - proceedings.neurips.cc
Adversarial training has emerged as a popular approach for training models that are robust
to inference-time adversarial attacks. However, our theoretical understanding of why and …

Uniformly stable algorithms for adversarial training and beyond

J Xiao, J Zhang, ZQ Luo, A Ozdaglar - arXiv preprint arXiv:2405.01817, 2024 - arxiv.org
In adversarial machine learning, neural networks suffer from a significant issue known as
robust overfitting, where the robust test accuracy decreases over epochs (Rice et al., 2020) …

Regularization for adversarial robust learning

J Wang, R Gao, Y Xie - arXiv preprint arXiv:2408.09672, 2024 - arxiv.org
Despite the growing prevalence of artificial neural networks in real-world applications, their
vulnerability to adversarial attacks remains a significant concern, which motivates us to …

Transformed low-rank parameterization can help robust generalization for tensor neural networks

A Wang, C Li, M Bai, Z Jin, G Zhou… - Advances in Neural …, 2023 - proceedings.neurips.cc
Multi-channel learning has gained significant attention in recent applications, where neural
networks with t-product layers (t-NNs) have shown promising performance through novel …

Bridging the gap: Rademacher complexity in robust and standard generalization

J **ao, Q Long, W Su - The Thirty Seventh Annual …, 2024 - proceedings.mlr.press
Training Deep Neural Networks (DNNs) with adversarial examples often results in
poor generalization to test-time adversarial data. This paper investigates this issue, known …