User-friendly introduction to PAC-Bayes bounds

P Alquier - Foundations and Trends® in Machine Learning, 2024 - nowpublishers.com
Aggregated predictors are obtained by making a set of basic predictors vote according to
some weights, that is, to some probability distribution. Randomized predictors are obtained …
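
As a concrete illustration of the distinction drawn here, a minimal Python sketch (all names hypothetical, not from the monograph) contrasting the two notions: the aggregated predictor takes a weighted vote of all basic predictors, while the randomized one samples a single predictor from the same distribution.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: three threshold classifiers and a weight vector rho.
predictors = [lambda x: x > 0.5, lambda x: x > 0.3, lambda x: x > 0.7]
rho = np.array([0.5, 0.3, 0.2])  # probability distribution over the predictors

def aggregated_predict(x):
    # Aggregated predictor: every basic predictor votes, weighted by rho.
    votes = np.array([float(h(x)) for h in predictors])
    return rho @ votes >= 0.5

def randomized_predict(x):
    # Randomized predictor: draw one predictor from rho and use it alone.
    h = predictors[rng.choice(len(predictors), p=rho)]
    return h(x)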

Generalization bounds: Perspectives from information theory and PAC-Bayes

F Hellström, G Durisi, B Guedj… - … and Trends® in …, 2025 - nowpublishers.com
A fundamental question in theoretical machine learning is generalization. Over the past
decades, the PAC-Bayesian approach has been established as a flexible framework to …
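
For orientation, one representative bound from this framework (a standard McAllester-type statement, quoted from common knowledge rather than from the monograph): for a loss in $[0,1]$, a prior $\pi$ chosen before seeing the data, and an i.i.d. sample of size $n$, with probability at least $1-\delta$, simultaneously for all posteriors $\rho$,

\[
\mathbb{E}_{\theta \sim \rho}[R(\theta)] \le \mathbb{E}_{\theta \sim \rho}[\hat{R}_n(\theta)] + \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln(2\sqrt{n}/\delta)}{2n}},
\]

where $R$ and $\hat{R}_n$ denote the population and empirical risks.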

A unified recipe for deriving (time-uniform) PAC-Bayes bounds

B Chugg, H Wang, A Ramdas - Journal of Machine Learning Research, 2023 - jmlr.org
We present a unified framework for deriving PAC-Bayesian generalization bounds. Unlike
most previous literature on this topic, our bounds are anytime-valid (i.e., time-uniform) …
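
The engine behind time-uniform guarantees of this kind is Ville's inequality: if $(M_t)_{t \ge 0}$ is a nonnegative supermartingale with $M_0 = 1$, then

\[
\mathbb{P}\bigl(\exists t \ge 1 : M_t \ge 1/\delta\bigr) \le \delta .
\]

Since the event is controlled over all times simultaneously, bounds derived this way remain valid at arbitrary, even data-dependent, stopping times.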

Tighter PAC-Bayes bounds through coin-betting

K Jang, KS Jun, I Kuzborskij… - The Thirty Sixth Annual …, 2023 - proceedings.mlr.press
We consider the problem of estimating the mean of a sequence of random elements $f(\theta, X_1), \ldots, f(\theta, X_n)$ where $f$ is a fixed scalar function …
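
A sketch of the betting construction this builds on (the generic coin-betting scheme; the paper's bets and analysis are more refined): a candidate mean $m$ is tested through the wealth process

\[
W_t(m) = \prod_{s=1}^{t} \bigl(1 + \lambda_s \,(f(\theta, X_s) - m)\bigr),
\]

with predictable bets $\lambda_s$. At the true mean, $W_t(m)$ is a nonnegative martingale with $W_0 = 1$, so Ville's inequality makes $\{m : W_t(m) < 1/\delta\}$ a time-uniform $(1-\delta)$-confidence set; better betting strategies shrink it, hence tighter PAC-Bayes bounds.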

Online PAC-Bayes learning

M Haddouche, B Guedj - Advances in Neural Information …, 2022 - proceedings.neurips.cc
Most PAC-Bayesian bounds hold in the batch learning setting where data is collected at
once, prior to inference or prediction. This somewhat departs from many contemporary …
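
A minimal Python sketch (hypothetical, not the paper's algorithm) of the kind of sequential procedure online PAC-Bayes bounds are designed to cover: a Gibbs-style posterior over a finite hypothesis set, updated after every observation rather than once on a batch.

import numpy as np

def online_gibbs_posteriors(loss_matrix, eta=0.5):
    # loss_matrix: shape (n_hypotheses, T), losses in [0, 1] revealed one round at a time.
    n, T = loss_matrix.shape
    rho = np.ones(n) / n              # uniform prior over the hypotheses
    posteriors = [rho.copy()]
    for t in range(T):
        # Exponential-weights / Gibbs update on the newly observed losses.
        rho = rho * np.exp(-eta * loss_matrix[:, t])
        rho = rho / rho.sum()
        posteriors.append(rho.copy())
    return posteriors                 # one data-dependent posterior per round

# Usage: 3 hypotheses, 5 rounds of random losses.
posteriors = online_gibbs_posteriors(np.random.default_rng(1).random((3, 5)))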

PAC-Bayes generalisation bounds for heavy-tailed losses through supermartingales

M Haddouche, B Guedj - arXiv preprint arXiv:2210.00928, 2022 - arxiv.org
While PAC-Bayes is now an established learning framework for light-tailed losses (e.g., subgaussian or subexponential), its extension to the case of heavy-tailed losses …
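
The change of assumption can be made precise (standard definitions, not quoted from the paper): light-tailed analyses need an exponential moment, e.g. subgaussianity,

\[
\mathbb{E}\bigl[e^{\lambda(\ell - \mathbb{E}\ell)}\bigr] \le e^{\lambda^2 \sigma^2/2} \quad \text{for all } \lambda \in \mathbb{R},
\]

while the supermartingale route only requires a bounded low-order moment such as $\mathbb{E}[\ell^2] < \infty$, which heavy-tailed losses can satisfy even when every exponential moment is infinite.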

PAC-Bayes analysis beyond the usual bounds

O Rivasplata, I Kuzborskij… - Advances in …, 2020 - proceedings.neurips.cc
We focus on a stochastic learning model where the learner observes a finite set of training
examples and the output of the learning process is a data-dependent distribution over a …
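
The change-of-measure tool at the heart of this line of work (the Donsker-Varadhan inequality, standard rather than specific to this paper): for any measurable $f$ and any distributions $\rho, \pi$ over hypotheses,

\[
\mathbb{E}_{\theta \sim \rho}[f(\theta)] \le \mathrm{KL}(\rho \,\|\, \pi) + \ln \mathbb{E}_{\theta \sim \pi}\bigl[e^{f(\theta)}\bigr].
\]

It trades the data-dependent posterior $\rho$ for a fixed prior $\pi$ at the price of the KL term; going "beyond the usual bounds" amounts to relaxing the usual restrictions on how $\rho$ and $\pi$ may depend on the data.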

A new family of generalization bounds using samplewise evaluated CMI

F Hellström, G Durisi - Advances in Neural Information …, 2022 - proceedings.neurips.cc
We present a new family of information-theoretic generalization bounds, in which the
training loss and the population loss are compared through a jointly convex function. This …
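
Background for the setup (the conditional mutual information framework of Steinke and Zakynthinou, stated from standard sources): a supersample $\tilde{Z} \in \mathcal{Z}^{n \times 2}$ is drawn, membership bits $U \in \{0,1\}^n$ select one column as the training set, and for losses in $[0,1]$ the expected generalization gap of the learned hypothesis $W$ satisfies

\[
\bigl|\mathbb{E}[\mathrm{gen}(W)]\bigr| \le \sqrt{\frac{2\, I(W; U \mid \tilde{Z})}{n}} .
\]

Evaluated CMI replaces $W$ in the information term by the losses it incurs on the supersample, here one sample at a time, which by data processing can only decrease the bound.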

Learning via Wasserstein-based high probability generalisation bounds

P Viallard, M Haddouche… - Advances in Neural …, 2023 - proceedings.neurips.cc
Minimising upper bounds on the population risk or the generalisation gap has been widely
used in structural risk minimisation (SRM); this is in particular at the core of PAC-Bayesian …
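
The structural gain over KL-based bounds follows from Kantorovich-Rubinstein duality (standard, not from the snippet): if $\theta \mapsto \hat{R}_n(\theta)$ is $L$-Lipschitz, then for any distributions $\rho, \pi$,

\[
\bigl|\mathbb{E}_{\theta \sim \rho}[\hat{R}_n(\theta)] - \mathbb{E}_{\theta \sim \pi}[\hat{R}_n(\theta)]\bigr| \le L \cdot W_1(\rho, \pi),
\]

so a Wasserstein bound stays finite even when $\rho$ is a point mass not absolutely continuous with respect to $\pi$, a case where $\mathrm{KL}(\rho \,\|\, \pi) = \infty$ renders classical PAC-Bayes bounds vacuous.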

Sample-conditioned hypothesis stability sharpens information-theoretic generalization bounds

Z Wang, Y Mao - Advances in Neural Information …, 2024 - proceedings.neurips.cc
We present new information-theoretic generalization guarantees through a novel
construction of the "neighboring-hypothesis" matrix and a new family of stability notions …