The curse of overparametrization in adversarial training: Precise analysis of robust generalization for random features regression

H Hassani, A Javanmard - The Annals of Statistics, 2024, Vol. 52, No. 2 - projecteuclid.org

Benign overfitting in adversarial training of neural networks

Y Wang, K Zhang, R Arora - Forty-first International Conference on …, 2024 - openreview.net
Benign overfitting is the phenomenon wherein none of the predictors in the hypothesis class
can achieve perfect accuracy (i.e., the non-realizable or noisy setting), but a model that …

Beyond the universal law of robustness: Sharper laws for random features and neural tangent kernels

S Bombari, S Kiyani, M Mondelli - … Conference on Machine …, 2023 - proceedings.mlr.press
Machine learning models are vulnerable to adversarial perturbations, and a thought-
provoking paper by Bubeck and Sellke has analyzed this phenomenon through the lens of …

Why adversarial training can hurt robust accuracy

J Clarysse, J Hörrmann, F Yang - arXiv preprint arXiv:2203.02006, 2022 - arxiv.org
Machine learning classifiers with high test accuracy often perform poorly under adversarial
attacks. It is commonly believed that adversarial training alleviates this issue. In this paper …

The surprising harmfulness of benign overfitting for adversarial robustness

Y Hao, T Zhang - arXiv preprint arXiv:2401.12236, 2024 - arxiv.org
Recent empirical and theoretical studies have established the generalization capabilities of
large machine learning models that are trained to (approximately or exactly) fit noisy data. In …

Towards unlocking the mystery of adversarial fragility of neural networks

J Gao, R Mudumbai, X Wu, J Yi, C Xu, H **e… - arXiv preprint arXiv …, 2024 - arxiv.org
In this paper, we study the adversarial robustness of deep neural networks for classification
tasks. We look at the smallest magnitude of possible additive perturbations that can change …

Margin-based sampling in high dimensions: When being active is less efficient than staying passive

A Tifrea, J Clarysse, F Yang - International Conference on …, 2023 - proceedings.mlr.press
It is widely believed that given the same labeling budget, active learning (AL) algorithms like
margin-based active learning achieve better predictive performance than passive learning …

Rethinking cost-sensitive classification in deep learning via adversarial data augmentation

Q Chen, R Al Kontar, M Nouiehed… - … Journal on Data …, 2024 - pubsonline.informs.org
Cost-sensitive classification is critical in applications where misclassification errors widely
vary in cost. However, overparameterization poses fundamental challenges to the cost …

Interpolation and regularization for causal learning

L Chennuru Vankadara, L Rendsburg… - Advances in …, 2022 - proceedings.neurips.cc
Recent work shows that in complex model classes, interpolators can achieve statistical
generalization and even be optimal for statistical learning. However, despite increasing …

Efficient regression with deep neural networks: how many datapoints do we need?

D Lengyel, A Borovykh - Has it Trained Yet? NeurIPS 2022 …, 2022 - openreview.net
While large datasets facilitate the learning of a robust representation of the data manifold,
the ability to obtain similar performance over small datasets is clearly computationally …