On the validation of Gibbs algorithms: Training datasets, test datasets and their aggregation

SM Perlaza, I Esnaola, G Bisson… - 2023 IEEE International …, 2023 - ieeexplore.ieee.org
The dependence on training data of the Gibbs algorithm (GA) is analytically characterized.
By adopting the expected empirical risk as the performance metric, the sensitivity of the GA …

Empirical risk minimization with relative entropy regularization

SM Perlaza, G Bisson, I Esnaola… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
The empirical risk minimization (ERM) problem with relative entropy regularization (ERM-
RER) is investigated under the assumption that the reference measure is a σ-finite measure …
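As a toy illustration (not taken from the paper itself), the solution of ERM-RER with a uniform reference measure over a finite model set reduces to a Gibbs probability mass proportional to exp(−empirical risk / λ). The data, model grid, and parameter values below are all assumptions chosen for the sketch:

```python
import numpy as np

# Hypothetical finite model set: candidate slopes for a 1-D linear fit.
thetas = np.linspace(-2.0, 2.0, 41)

# Toy training data (assumed for illustration only).
rng = np.random.default_rng(0)
x = rng.normal(size=20)
y = 0.7 * x + 0.1 * rng.normal(size=20)

# Empirical risk of each candidate model (mean squared error).
emp_risk = np.array([np.mean((y - t * x) ** 2) for t in thetas])

# Uniform reference measure; lam plays the role of the regularization factor.
lam = 0.1
log_w = -emp_risk / lam          # unnormalized log-density w.r.t. the reference
log_w -= log_w.max()             # shift for numerical stability
gibbs = np.exp(log_w)
gibbs /= gibbs.sum()             # normalized Gibbs probability mass

best = thetas[np.argmax(gibbs)]  # the mode sits at the empirical risk minimizer
```

As λ → 0 the mass concentrates on the ERM solution; as λ grows it spreads back toward the (uniform) reference measure, which is the regularization trade-off the ERM-RER line of work analyzes.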

The generalization error of machine learning algorithms

SM Perlaza, X Zou - arXiv preprint arXiv:2411.12030, 2024 - arxiv.org
In this paper, the method of gaps, a technique for deriving closed-form expressions in terms
of information measures for the generalization error of machine learning algorithms, is …

Analysis of the relative entropy asymmetry in the regularization of empirical risk minimization

F Daunas, I Esnaola, SM Perlaza… - 2023 IEEE International …, 2023 - ieeexplore.ieee.org
The effect of the relative entropy asymmetry is analyzed in the empirical risk minimization
with relative entropy regularization (ERM-RER) problem. A novel regularization is …

Asymmetry of the relative entropy in the regularization of empirical risk minimization

F Daunas, I Esnaola, SM Perlaza, HV Poor - arXiv preprint arXiv …, 2024 - arxiv.org
The effect of relative entropy asymmetry is analyzed in the context of empirical risk
minimization (ERM) with relative entropy regularization (ERM-RER). Two regularizations are …

Empirical risk minimization with relative entropy regularization type-II

F Daunas, I Esnaola, SM Perlaza, HV Poor - 2023 - hal.science
The effect of the relative entropy asymmetry is analyzed in the empirical risk minimization
with relative entropy regularization (ERM-RER) problem. A novel regularization is …

On the generalization error of meta learning for the Gibbs algorithm

Y Bu, HV Tetali, G Aminian… - … on Information Theory …, 2023 - ieeexplore.ieee.org
We analyze the generalization ability of joint-training meta learning algorithms via the Gibbs
algorithm. Our exact characterization of the expected meta generalization error for the meta …

Towards optimal inverse temperature in the Gibbs algorithm

Y Bu - 2024 IEEE International Symposium on Information …, 2024 - ieeexplore.ieee.org
This paper explores the problem of selecting optimal hyperparameters in the Gibbs
algorithm to minimize the population risk, specifically focusing on the inverse temperature …

Information-theoretic Analysis of Bayesian Test Data Sensitivity

F Futami, T Iwata - International Conference on Artificial …, 2024 - proceedings.mlr.press
Bayesian inference is often used to quantify uncertainty. Several recent analyses have
rigorously decomposed uncertainty in prediction by Bayesian inference into two types: the …

An exact characterization of the generalization error of machine learning algorithms

X Zou, SM Perlaza, I Esnaola, E Altman, HV Poor - 2024 - inria.hal.science
The worst-case data-generating (WCDG) probability measure is introduced as a tool for
characterizing the generalization capabilities of machine learning algorithms. Such a WCDG …