The curse of overparametrization in adversarial training: Precise analysis of robust generalization for random features regression
The Annals of Statistics 2024, Vol. 52, No. 2 …
Benign overfitting in adversarial training of neural networks
Benign overfitting is the phenomenon wherein none of the predictors in the hypothesis class can achieve perfect accuracy (i.e., the non-realizable or noisy setting), but a model that …
Beyond the universal law of robustness: Sharper laws for random features and neural tangent kernels
Abstract Machine learning models are vulnerable to adversarial perturbations, and a thought-provoking paper by Bubeck and Sellke has analyzed this phenomenon through the lens of …
Why adversarial training can hurt robust accuracy
J Clarysse, J Hörrmann, F Yang - arXiv preprint arXiv:2203.02006, 2022 - arxiv.org
Machine learning classifiers with high test accuracy often perform poorly under adversarial attacks. It is commonly believed that adversarial training alleviates this issue. In this paper …
The surprising harmfulness of benign overfitting for adversarial robustness
Recent empirical and theoretical studies have established the generalization capabilities of large machine learning models that are trained to (approximately or exactly) fit noisy data. In …
Towards unlocking the mystery of adversarial fragility of neural networks
In this paper, we study the adversarial robustness of deep neural networks for classification tasks. We look at the smallest magnitude of possible additive perturbations that can change …
Margin-based sampling in high dimensions: When being active is less efficient than staying passive
It is widely believed that given the same labeling budget, active learning (AL) algorithms like margin-based active learning achieve better predictive performance than passive learning …
Rethinking cost-sensitive classification in deep learning via adversarial data augmentation
Cost-sensitive classification is critical in applications where misclassification errors widely vary in cost. However, overparameterization poses fundamental challenges to the cost …
Interpolation and regularization for causal learning
Recent work shows that in complex model classes, interpolators can achieve statistical generalization and even be optimal for statistical learning. However, despite increasing …
Efficient regression with deep neural networks: how many datapoints do we need?
While large datasets facilitate the learning of a robust representation of the data manifold, the ability to obtain similar performance over small datasets is clearly computationally …