Generalization bounds: Perspectives from information theory and PAC-Bayes
A fundamental question in theoretical machine learning is generalization. Over the past
decades, the PAC-Bayesian approach has been established as a flexible framework to …
Generalized semi-supervised learning via self-supervised feature adaptation
Traditional semi-supervised learning (SSL) assumes that the feature distributions of labeled
and unlabeled data are consistent, which rarely holds in realistic scenarios. In this paper, we …
Towards generalization beyond pointwise learning: A unified information-theoretic perspective
The recent surge in contrastive learning has intensified the interest in understanding the
generalization of non-pointwise learning paradigms. While information-theoretic analysis …
Information-theoretic characterizations of generalization error for the Gibbs algorithm
Various approaches have been developed to upper bound the generalization error of a
supervised learning algorithm. However, existing bounds are often loose and even vacuous …
Domain adaptation with domain specific information and feature disentanglement for bearing fault diagnosis
S **e, P **a, H Zhang - Measurement Science and Technology, 2024 - iopscience.iop.org
Collecting bearing fault signals from several rotating machines or under varied operating
conditions often results in data distribution offset. Furthermore, the newly obtained data is …
In all likelihoods: Robust selection of pseudo-labeled data
Self-training is a simple yet effective method within semi-supervised learning. Self-training's
rationale is to iteratively enhance training data by adding pseudo-labeled data. Its …
Approximately Bayes-optimal pseudo-label selection
Semi-supervised learning by self-training heavily relies on pseudo-label selection (PLS).
This selection often depends on the initial model fit on labeled data. Early overfitting might …
Learning under distribution mismatch and model misspecification
We study learning algorithms when there is a mismatch between the distributions of the
training and test datasets of a learning algorithm. The effect of this mismatch on the …
Information-theoretic generalization bounds for transductive learning and their applications
In this paper, we establish generalization bounds for transductive learning algorithms in the
context of information theory and PAC-Bayes, covering both the random sampling and the …
Information-theoretic characterization of the generalization error for iterative semi-supervised learning
Using information-theoretic principles, we consider the generalization error (gen-error) of
iterative semi-supervised learning (SSL) algorithms that iteratively generate pseudolabels …