User-friendly introduction to PAC-Bayes bounds
P Alquier - Foundations and Trends® in Machine Learning, 2024 - nowpublishers.com
Aggregated predictors are obtained by making a set of basic predictors vote according to
some weights, that is, to some probability distribution. Randomized predictors are obtained …
Generalization bounds: Perspectives from information theory and PAC-Bayes
A fundamental question in theoretical machine learning is generalization. Over the past
decades, the PAC-Bayesian approach has been established as a flexible framework to …
A new family of generalization bounds using samplewise evaluated CMI
We present a new family of information-theoretic generalization bounds, in which the
training loss and the population loss are compared through a jointly convex function. This …
Sample-conditioned hypothesis stability sharpens information-theoretic generalization bounds
We present new information-theoretic generalization guarantees through a novel
construction of the "neighboring-hypothesis" matrix and a new family of stability notions …
Limitations of information-theoretic generalization bounds for gradient descent methods in stochastic convex optimization
To date, no “information-theoretic” frameworks for reasoning about generalization error have
been shown to establish minimax rates for gradient descent in the setting of stochastic …
Single trajectory nonparametric learning of nonlinear dynamics
Given a single trajectory of a dynamical system, we analyze the performance of the
nonparametric least squares estimator (LSE). More precisely, we give nonasymptotic …
Minimum description length and generalization guarantees for representation learning
M Sefidgaran, A Zaidi… - Advances in Neural …, 2024 - proceedings.neurips.cc
A major challenge in designing efficient statistical supervised learning algorithms is finding
representations that perform well not only on available training samples but also on unseen …
Online-to-PAC conversions: Generalization bounds via regret analysis
We present a new framework for deriving bounds on the generalization error of statistical
learning algorithms from the perspective of online learning. Specifically, we construct an …
Information-theoretic generalization bounds for learning from quantum data
Learning tasks play an increasingly prominent role in quantum information and computation.
They range from fundamental problems such as state discrimination and metrology over the …
Tighter information-theoretic generalization bounds from supersamples
In this work, we present a variety of novel information-theoretic generalization bounds for
learning algorithms, from the supersample setting of Steinke & Zakynthinou (2020)-the …