Cross-entropy loss functions: Theoretical analysis and applications

A Mao, M Mohri, Y Zhong - International conference on …, 2023 - proceedings.mlr.press
Cross-entropy is a widely used loss function in applications. It coincides with the logistic loss
applied to the outputs of a neural network when the softmax is used. But what guarantees …
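
To make the identity in this snippet concrete, here is a minimal NumPy sketch (the logits, class count, and label are purely illustrative) showing that the cross-entropy of a one-hot target against softmax outputs equals the negative log-softmax probability of the true class, i.e. the multinomial logistic loss:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Illustrative logits for a 3-class problem with true label y = 2.
logits = np.array([1.0, -0.5, 2.0])
y = 2

# Cross-entropy against the one-hot target ...
one_hot = np.eye(3)[y]
cross_entropy = -np.sum(one_hot * np.log(softmax(logits)))

# ... equals the logistic loss on the softmax output of the true class.
logistic_loss = -np.log(softmax(logits)[y])

assert np.isclose(cross_entropy, logistic_loss)
print(cross_entropy, logistic_loss)
```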

Optimal learners for realizable regression: PAC learning and online learning

I Attias, S Hanneke, A Kalavasis… - Advances in …, 2023 - proceedings.neurips.cc
In this work, we aim to characterize the statistical complexity of realizable regression both in
the PAC learning setting and the online learning setting. Previous work had established the …

Improved generalization bounds for robust learning

I Attias, A Kontorovich… - Algorithmic Learning …, 2019 - proceedings.mlr.press
We consider a model of robust learning in an adversarial environment. The learner gets
uncorrupted training data with access to possible corruptions that may be effected by the …
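
For orientation, the adversarial corruption model described here is typically formalized through a worst-case loss; the rendering below uses generic notation (hypothesis h, base loss ℓ, perturbation set U(x)) and is a standard formulation rather than a quotation from the paper:

```latex
% Robust loss of hypothesis h on example (x, y), where U(x) is the set of
% corruptions the adversary may apply to x at test time:
\ell_{\mathrm{rob}}(h; x, y) = \sup_{z \in \mathcal{U}(x)} \ell\bigl(h(z), y\bigr),
% and the learner aims to minimize its expectation over the data distribution D:
R_{\mathrm{rob}}(h) = \mathbb{E}_{(x, y) \sim \mathcal{D}}
  \Bigl[\, \sup_{z \in \mathcal{U}(x)} \ell\bigl(h(z), y\bigr) \Bigr].
```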

Information complexity of stochastic convex optimization: Applications to generalization and memorization

I Attias, GK Dziugaite, M Haghifam, R Livni… - arXiv preprint arXiv …, 2024 - arxiv.org
In this work, we investigate the interplay between memorization and learning in the context
of stochastic convex optimization (SCO). We define memorization via the information …

On the computability of robust PAC learning

P Gourdeau, L Tosca, R Urner - The Thirty Seventh Annual …, 2024 - proceedings.mlr.press
We initiate the study of computability requirements for adversarially robust learning.
Adversarially robust PAC-type learnability is by now an established field of research …

Agnostic sample compression schemes for regression

I Attias, S Hanneke, A Kontorovich… - Forty-first International …, 2024 - openreview.net
We obtain the first positive results for bounded sample compression in the agnostic
regression setting with the $\ell_p$ loss, where $p \in [1,\infty]$. We construct a generic …
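
For readers unfamiliar with the object being constructed, the standard textbook notion of a sample compression scheme (stated here in generic notation, not quoted from the paper) is: a compression map κ keeps at most k labeled examples from the training sample (possibly with side information), and a reconstruction map ρ turns that subsample back into a predictor whose empirical loss competes with the best hypothesis in the class:

```latex
% A sample compression scheme (kappa, rho) of size k for a class F:
%   kappa : (X x Y)^m -> (X x Y)^{<= k}   selects a subsample of at most k points,
%   rho   : (X x Y)^{<= k} -> Y^X         reconstructs a predictor from it.
% Agnostic requirement (as commonly stated): for every sample S, the predictor
% h = rho(kappa(S)) satisfies
\frac{1}{|S|} \sum_{(x, y) \in S} \ell\bigl(h(x), y\bigr)
  \;\le\; \inf_{f \in \mathcal{F}} \frac{1}{|S|} \sum_{(x, y) \in S} \ell\bigl(f(x), y\bigr).
```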

Adversarially robust PAC learnability of real-valued functions

I Attias, S Hanneke - International Conference on Machine …, 2023 - proceedings.mlr.press
We study robustness to test-time adversarial attacks in the regression setting with $\ell_p$
losses and arbitrary perturbation sets. We address the question of which function classes …
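
As a rough computational illustration only (the paper treats arbitrary perturbation sets, whereas the sketch below maximizes over a finite, discretized set; the predictor, loss exponent, and radius are placeholders), the test-time robust $\ell_p$ regression loss can be approximated like this:

```python
import numpy as np

def robust_lp_loss(predict, x, y, perturbations, p=2.0):
    """Worst-case |predict(z) - y|**p over a finite set of candidate perturbed
    inputs z (a crude stand-in for the supremum over an arbitrary set U(x))."""
    return max(abs(predict(z) - y) ** p for z in perturbations)

# Illustrative linear predictor and a discretized interval around x as U(x).
w, b = 1.5, -0.2
predict = lambda z: w * z + b
x, y = 1.0, 1.2
candidates = np.linspace(x - 0.1, x + 0.1, 101)

print(robust_lp_loss(predict, x, y, candidates, p=2.0))
```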

Uniformly stable algorithms for adversarial training and beyond

J Xiao, J Zhang, ZQ Luo, A Ozdaglar - arXiv preprint arXiv:2405.01817, 2024 - arxiv.org
In adversarial machine learning, neural networks suffer from a significant issue known as
robust overfitting, where the robust test accuracy decreases over epochs (Rice et al., 2020) …
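
To ground the training procedure under discussion, here is a small self-contained sketch of adversarial training with ℓ∞-bounded perturbations on a toy logistic-regression model (a generic FGSM-style inner maximization; this is not the paper's algorithm, only an illustration of the kind of procedure whose stability is being analyzed):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data with labels in {-1, +1}.
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true + 0.1 * rng.normal(size=n))

def grad_logistic(w, x, y):
    # Gradients of the logistic loss log(1 + exp(-y <w, x>)) w.r.t. w and x.
    s = -y / (1.0 + np.exp(y * (w @ x)))
    return s * x, s * w

eps, lr, epochs = 0.1, 0.05, 50
w = np.zeros(d)
for _ in range(epochs):
    for i in rng.permutation(n):
        x_i, y_i = X[i], y[i]
        # Inner maximization: one FGSM step inside the l_inf ball of radius eps.
        _, gx = grad_logistic(w, x_i, y_i)
        x_adv = x_i + eps * np.sign(gx)
        # Outer minimization: SGD step on the adversarially perturbed example.
        gw, _ = grad_logistic(w, x_adv, y_i)
        w -= lr * gw

print("clean training accuracy:", np.mean(np.sign(X @ w) == y))
```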

Adversarially robust learning with uncertain perturbation sets

T Lechner, V Pathak, R Urner - Advances in Neural …, 2023 - proceedings.neurips.cc
In many real-world settings exact perturbation sets to be used by an adversary are not
plausibly available to a learner. While prior literature has studied both scenarios with …

Improved generalization bounds for adversarially robust learning

I Attias, A Kontorovich, Y Mansour - Journal of Machine Learning Research, 2022 - jmlr.org
We consider a model of robust learning in an adversarial environment. The learner gets
uncorrupted training data with access to possible corruptions that may be effected by the …