User-friendly introduction to PAC-Bayes bounds
P Alquier - Foundations and Trends® in Machine Learning, 2024 - nowpublishers.com
Aggregated predictors are obtained by making a set of basic predictors vote according to
some weights, that is, to some probability distribution. Randomized predictors are obtained …
Generalization bounds: Perspectives from information theory and PAC-Bayes
A fundamental question in theoretical machine learning is generalization. Over the past
decades, the PAC-Bayesian approach has been established as a flexible framework to …
A unified recipe for deriving (time-uniform) PAC-Bayes bounds
We present a unified framework for deriving PAC-Bayesian generalization bounds. Unlike
most previous literature on this topic, our bounds are anytime-valid (i.e., time-uniform) …
Non-vacuous generalization bounds for large language models
Modern language models can contain billions of parameters, raising the question of whether
they can generalize beyond the training data or simply parrot their training corpora. We …
Mission impossible: A statistical perspective on jailbreaking LLMs
Large language models (LLMs) are trained on a deluge of text data with limited quality
control. As a result, LLMs can exhibit unintended or even harmful behaviours, such as …
PAC-Bayes generalisation bounds for heavy-tailed losses through supermartingales
M Haddouche, B Guedj - arXiv preprint arXiv:2210.00928, 2022 - arxiv.org
While PAC-Bayes is now an established learning framework for light-tailed losses (e.g., subgaussian or subexponential), its extension to the case of heavy-tailed losses …
Generalization bounds for meta-learning via PAC-Bayes and uniform stability
A Farid, A Majumdar - Advances in neural information …, 2021 - proceedings.neurips.cc
We are motivated by the problem of providing strong generalization guarantees in the
context of meta-learning. Existing generalization bounds are either challenging to evaluate …
A new family of generalization bounds using samplewise evaluated CMI
F Hellström, G Durisi - Advances in Neural Information …, 2022 - proceedings.neurips.cc
We present a new family of information-theoretic generalization bounds, in which the
training loss and the population loss are compared through a jointly convex function. This …
Learning via Wasserstein-based high probability generalisation bounds
P Viallard, M Haddouche… - Advances in Neural …, 2023 - proceedings.neurips.cc
Minimising upper bounds on the population risk or the generalisation gap has been widely
used in structural risk minimisation (SRM)--this is in particular at the core of PAC-Bayesian …
Generalization analysis of machine learning algorithms via the worst-case data-generating probability measure
In this paper, the worst-case probability measure over the data is introduced as a tool for
characterizing the generalization capabilities of machine learning algorithms. More …