The Bayesian learning rule
We show that many machine-learning algorithms are specific instances of a single algorithm
called the Bayesian learning rule. The rule, derived from Bayesian principles, yields a wide …
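The rule's template, as I'd summarize it: pick an exponential-family candidate q and update its natural parameter by a natural-gradient step on expected loss minus entropy. Below is a minimal sketch of one well-known special case, a fixed-variance Gaussian candidate, where the rule collapses to a smoothed gradient descent on the mean; the toy loss and constants are my own choices, not the paper's.

```python
import numpy as np

# Sketch of the Bayesian learning rule for a fixed-variance Gaussian
# candidate q = N(m, s^2). The general rule updates natural parameters by a
# natural-gradient step on E_q[loss] - entropy(q); with the variance frozen,
# it reduces (up to rescaling the step size by s^2) to
#   m <- m - rho * E_q[grad loss(theta)],
# estimated below by Monte Carlo. Loss and constants are toy choices.

def loss_grad(theta):
    return theta - 3.0                 # toy quadratic loss 0.5 * (theta - 3)^2

rng = np.random.default_rng(0)
m, s, rho = 0.0, 0.5, 0.1              # mean, fixed std, learning rate
for t in range(200):
    theta = m + s * rng.standard_normal(64)    # samples from q
    m -= rho * loss_grad(theta).mean()         # MC estimate of E_q[grad loss]

print(m)  # approaches the minimizer at 3.0
```

Swapping in other candidate families and other approximations of the natural gradient is what recovers the range of algorithms the paper catalogues.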
PRIOR: Personalized Prior for Reactivating the Information Overlooked in Federated Learning.
Classical federated learning (FL) enables training machine learning models without sharing
data, preserving privacy, but heterogeneous data characteristics degrade the …
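For context, here is a minimal FedAvg-style sketch of the classical FL loop the snippet describes (this is not the PRIOR method itself; the client distributions, model, and step sizes are invented for illustration). It shows how non-IID clients degrade the averaged model.

```python
import numpy as np

# FedAvg-style skeleton: clients fit a scalar model on private data, only
# model updates are shared, and the server averages them. The non-IID client
# means below are what makes the averaged model fit every client poorly.

rng = np.random.default_rng(1)
client_data = [rng.normal(mu, 1.0, 100) for mu in (-2.0, 0.0, 5.0)]  # non-IID

w = 0.0                                # global model (estimate of a mean)
for round_ in range(20):
    local = []
    for data in client_data:           # each client trains locally
        w_i = w
        for x in rng.permutation(data)[:32]:
            w_i -= 0.1 * (w_i - x)     # SGD on the local squared loss
        local.append(w_i)
    w = np.mean(local)                 # server aggregates; raw data never leaves

print(w)  # near 1.0, the average of client means -- a poor fit for every client
```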
Sharp global convergence guarantees for iterative nonconvex optimization with random data
KA Chandrasekher, A Pananjady… - The Annals of Statistics, 2023 - projecteuclid.org
The Annals of Statistics 2023, Vol. 51, No. 1, 179–210. https://doi.org/10.1214/22-AOS2246
Mirror descent with relative smoothness in measure spaces, with application to Sinkhorn and EM
Many problems in machine learning can be formulated as optimizing a convex functional
over a vector space of measures. This paper studies the convergence of the mirror descent …
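As a finite-dimensional stand-in for the measure-space setting, here is entropic mirror descent on the probability simplex. The multiplicative form of the update is the point; the linear objective is a toy choice of mine. (Sinkhorn and EM arise in the paper as instances of this scheme.)

```python
import numpy as np

# Mirror descent in the KL geometry on the probability simplex. With the
# entropy mirror map, the update is multiplicative:
#   p <- p * exp(-eta * grad F(p)) / normalizer.

c = np.array([0.3, 1.0, 2.0])          # toy linear cost: F(p) = <c, p>
p = np.ones(3) / 3                     # uniform initialization
eta = 0.5
for t in range(100):
    p = p * np.exp(-eta * c)           # grad F(p) = c
    p /= p.sum()                       # normalization = Bregman projection

print(p)  # mass concentrates on the lowest-cost coordinate
```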
Stochastic approximation beyond gradient for signal processing and machine learning
Stochastic Approximation (SA) is a classical algorithm that, since its early days, has had a
huge impact on signal processing and, nowadays, on machine learning, due to the necessity …
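The SA template is theta_{t+1} = theta_t - gamma_t * H(theta_t, X_t), for a noisy field H whose mean root is sought. A minimal Robbins-Monro sketch, with a textbook field and step-size schedule of my choosing rather than anything taken from the paper:

```python
import numpy as np

# Robbins-Monro stochastic approximation: drive theta to the root of
# E[H(theta, X)]. Here H(theta, x) = theta - x, so the root is the unknown
# mean of the data stream.

rng = np.random.default_rng(2)
theta = 0.0
for t in range(1, 10_001):
    x = rng.normal(4.0, 2.0)        # noisy observation
    gamma = 1.0 / t                 # Robbins-Monro step sizes
    theta -= gamma * (theta - x)

print(theta)  # converges to the true mean 4.0
```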
Theoretical guarantees for variational inference with fixed-variance mixture of Gaussians
Variational inference (VI) is a popular approach in Bayesian inference that looks for the best
approximation of the posterior distribution within a parametric family, minimizing a loss that …
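A sketch of what such a family looks like in one dimension, assuming an equal-weight mixture with a shared fixed variance, so that only the means are optimized. The target, sample sizes, and step size are my own toy choices.

```python
import numpy as np

# VI with q = (1/N) * sum_i N(mu_i, s2): only the means move. Differentiating
# KL(q || pi) in mu_i gives (the 1/N factor is absorbed into eta)
#   grad_i = E_{x ~ N(mu_i, s2)}[ grad_x log q(x) - grad_x log pi(x) ],
# estimated here by Monte Carlo.

rng = np.random.default_rng(3)
s2 = 0.25                                   # fixed component variance
mu = rng.normal(0.0, 2.0, size=8)           # mixture means (the parameters)

def grad_log_q(x):                          # d/dx log q(x) for the mixture
    w = np.exp(-(x[:, None] - mu[None, :]) ** 2 / (2 * s2))
    return (-(x[:, None] - mu[None, :]) / s2 * w).sum(1) / w.sum(1)

def grad_log_pi(x):                         # toy target pi = N(3, 1)
    return -(x - 3.0)

eta = 0.05
for t in range(500):
    for i in range(len(mu)):
        x = mu[i] + np.sqrt(s2) * rng.standard_normal(64)
        mu[i] -= eta * (grad_log_q(x) - grad_log_pi(x)).mean()

print(mu.round(2))  # means spread around the target mean 3.0
```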
Federated-EM with heterogeneity mitigation and variance reduction
The Expectation Maximization (EM) algorithm is the default algorithm for inference
in latent variable models. As in any other field of machine learning, applications of latent …
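The federated-EM skeleton, as I'd sketch it for a toy Gaussian mixture: clients run the E-step on private data and share only sufficient statistics, which the server pools for the M-step. The paper's heterogeneity-mitigation and variance-reduction components are omitted here.

```python
import numpy as np

# Federated EM skeleton for a 1D mixture of two unit-variance Gaussians with
# unknown means and (assumed) equal weights. Clients ship only sufficient
# statistics; raw data never leaves the clients.

rng = np.random.default_rng(4)
clients = [np.concatenate([rng.normal(-2, 1, 80), rng.normal(2, 1, 40)]),
           np.concatenate([rng.normal(-2, 1, 30), rng.normal(2, 1, 90)])]

mu = np.array([-1.0, 1.0])                       # global parameters
for it in range(50):
    num, den = np.zeros(2), np.zeros(2)
    for x in clients:                            # local E-step per client
        logp = -(x[:, None] - mu[None, :]) ** 2 / 2
        r = np.exp(logp - logp.max(1, keepdims=True))
        r /= r.sum(1, keepdims=True)             # responsibilities
        num += r.T @ x                           # sufficient statistics only
        den += r.sum(0)
    mu = num / den                               # server M-step

print(mu.round(2))  # recovers means near -2 and 2
```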
A Bregman proximal perspective on classical and quantum Blahut-Arimoto algorithms
The Blahut-Arimoto algorithm is a well-known method to compute classical channel
capacities and rate-distortion functions. Recent works have extended this algorithm to …
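For reference, the classical Blahut-Arimoto iteration for channel capacity, written from memory for a binary symmetric channel (the channel and iteration count are my choices):

```python
import numpy as np

# Blahut-Arimoto for channel capacity C = max_p I(p; W), with channel matrix
# W[x, y] = P(y | x). Alternating update:
#   p(x) <- p(x) * exp( KL( W(.|x) || q ) ), then normalize,
# where q(y) = sum_x p(x) W(y|x) is the current output law.

W = np.array([[0.9, 0.1],              # binary symmetric channel, eps = 0.1
              [0.1, 0.9]])
p = np.array([0.5, 0.5])
for t in range(200):
    q = p @ W                                       # output distribution
    D = (W * np.log(W / q[None, :])).sum(1)         # per-input KL divergences
    p = p * np.exp(D)
    p /= p.sum()

capacity = (p * D).sum() / np.log(2)   # at a fixed point this equals C in bits
print(p.round(3), capacity.round(4))   # uniform input, C ~ 0.531 bits
```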
Sharp global convergence guarantees for iterative nonconvex optimization: A Gaussian process perspective
KA Chandrasekher, A Pananjady… - arXiv preprint arXiv …, 2021 - arxiv.org
We consider a general class of regression models with normally distributed covariates, and
the associated nonconvex problem of fitting these models from data. We develop a general …
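One concrete instance of this setting (my choice of example; the paper's running examples may differ) is phase retrieval with Gaussian covariates, fitted by gradient descent on the quartic least-squares loss:

```python
import numpy as np

# Phase retrieval: y_i = (a_i' x*)^2 with Gaussian covariates a_i, fitted by
# gradient descent on f(x) = (1/4n) * sum_i ((a_i' x)^2 - y_i)^2 -- one of the
# iterative nonconvex algorithms such analyses cover.

rng = np.random.default_rng(5)
n, d = 2000, 10
A = rng.standard_normal((n, d))                  # Gaussian covariates
x_star = rng.standard_normal(d)
y = (A @ x_star) ** 2                            # phaseless observations

x = x_star + 0.5 * rng.standard_normal(d)        # warm start, for simplicity
eta = 0.01
for t in range(500):
    v = A @ x
    x -= eta * A.T @ ((v ** 2 - y) * v) / n      # gradient of the quartic loss

err = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
print(err)  # near zero: recovery up to the global sign
```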
EM++: A parameter learning framework for stochastic switching systems
This paper proposes a general switching dynamical system model and a custom
majorization-minimization-based algorithm, EM++, for identifying its parameters. For certain …
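To illustrate the E-step/M-step template behind such majorization-minimization schemes in the simplest possible "switching" model, here is EM for a two-mode mixture of linear regressions with known noise level. EM++ itself targets genuinely dynamical switching systems and is not reproduced here.

```python
import numpy as np

# EM (a majorization-minimization scheme) for a static two-mode switching
# regression: y = a[mode] * u + noise, with the mode hidden.

rng = np.random.default_rng(6)
n = 400
u = rng.uniform(-1, 1, n)                         # inputs
mode = rng.integers(0, 2, n)                      # hidden switching signal
true_a = np.array([2.0, -1.0])                    # per-mode gains
y = true_a[mode] * u + 0.1 * rng.standard_normal(n)

a = np.array([0.5, -0.5])                         # initial gains
for it in range(50):
    # E-step: posterior over the active mode for each sample
    logp = -(y[:, None] - u[:, None] * a[None, :]) ** 2 / (2 * 0.1 ** 2)
    r = np.exp(logp - logp.max(1, keepdims=True))
    r /= r.sum(1, keepdims=True)
    # M-step: weighted least squares per mode
    a = (r * (u * y)[:, None]).sum(0) / (r * (u ** 2)[:, None]).sum(0)

print(a.round(2))  # recovers gains near 2.0 and -1.0
```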