Learning mixtures of Gaussians using diffusion models

K Gatmiry, J Kelner, H Lee - arXiv preprint arXiv:2404.18869, 2024 - arxiv.org
We give a new algorithm for learning mixtures of $k$ Gaussians (with identity covariance in $\mathbb{R}^n$) to TV error $\varepsilon$, with quasi-polynomial ($O(n^{\text{poly …

Randomly initialized EM algorithm for two-component Gaussian mixture achieves near optimality in $O(\sqrt{n})$ iterations

Y Wu, HH Zhou - Mathematical Statistics and Learning, 2021 - ems.press
We analyze the classical EM algorithm for parameter estimation in the symmetric two-component Gaussian mixture in $d$ dimensions. We show that, even in the absence of any …
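
In the symmetric setting analyzed here, $\tfrac12 N(\theta, I_d) + \tfrac12 N(-\theta, I_d)$, the E- and M-steps collapse into the single closed-form update $\theta^{t+1} = \frac{1}{n}\sum_{i=1}^n \tanh(\langle \theta^t, x_i \rangle)\, x_i$. A minimal NumPy sketch of this iteration on synthetic data (illustrative only, not the authors' code; the dimensions and iteration count are arbitrary choices):

```python
# EM for the symmetric mixture 0.5*N(theta, I_d) + 0.5*N(-theta, I_d).
# The posterior weight of the + component is sigmoid(2<theta, x_i>), so the
# E- and M-steps reduce to: theta <- mean_i tanh(<theta, x_i>) * x_i.
import numpy as np

rng = np.random.default_rng(0)
n, d = 10_000, 5
theta_star = np.full(d, 1.0)                      # true center (synthetic)
signs = rng.choice([-1.0, 1.0], size=n)
X = signs[:, None] * theta_star + rng.standard_normal((n, d))

theta = rng.standard_normal(d)                    # random initialization
for _ in range(200):
    theta = (np.tanh(X @ theta)[:, None] * X).mean(axis=0)

# The mixture is symmetric, so theta is identifiable only up to sign.
err = min(np.linalg.norm(theta - theta_star), np.linalg.norm(theta + theta_star))
print(f"parameter error (up to sign): {err:.3f}")
```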

Refined convergence rates for maximum likelihood estimation under finite mixture models

T Manole, N Ho - International Conference on Machine …, 2022 - proceedings.mlr.press
We revisit the classical problem of deriving convergence rates for the maximum likelihood
estimator (MLE) in finite mixture models. The Wasserstein distance has become a standard …

Multivariate, heteroscedastic empirical Bayes via nonparametric maximum likelihood

JA Soloff, A Guntuboyina, B Sen - Journal of the Royal Statistical …, 2025 - academic.oup.com
Multivariate, heteroscedastic errors complicate statistical inference in many large-scale denoising problems. Empirical Bayes is attractive in such settings, but standard parametric …
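
As background on the approach named in the title: the nonparametric maximum likelihood estimator (NPMLE) of the mixing distribution is often computed by restricting it to a fixed grid and optimizing the mixture likelihood over the grid weights. Below is a one-dimensional sketch using plain EM on the weights; it is illustrative only (the paper itself is multivariate, and practical solvers typically use convex optimization rather than EM), and all grid and noise choices are assumptions:

```python
# Grid-based NPMLE sketch for X_i ~ N(mu_i, sigma_i^2) with mu_i ~ G:
# approximate G by weights w on a fixed grid u and run EM on w.
# Illustrative 1-D version; the paper treats the multivariate case.
import numpy as np

rng = np.random.default_rng(1)
n = 2_000
mu = rng.choice([-2.0, 0.0, 2.0], size=n)       # true means (synthetic)
sigma = rng.uniform(0.5, 1.5, size=n)           # known heteroscedastic noise
x = mu + sigma * rng.standard_normal(n)

u = np.linspace(x.min(), x.max(), 300)          # support grid for G
w = np.full(u.size, 1.0 / u.size)               # uniform initial weights

# L[i, j] = N(x_i; u_j, sigma_i^2): likelihood of grid point j for obs i
L = np.exp(-0.5 * ((x[:, None] - u[None, :]) / sigma[:, None]) ** 2) \
    / (np.sqrt(2 * np.pi) * sigma[:, None])

def posterior(w):
    p = L * w
    return p / p.sum(axis=1, keepdims=True)

for _ in range(500):
    w = posterior(w).mean(axis=0)               # EM update of grid weights

# Empirical-Bayes denoising: posterior mean of mu_i given x_i under fitted G
denoised = (posterior(w) * u).sum(axis=1)
print("MSE denoised vs raw:", np.mean((denoised - mu) ** 2),
      np.mean((x - mu) ** 2))
```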

Polynomial methods in statistical inference: Theory and practice

Y Wu, P Yang - Foundations and Trends® in …, 2020 - nowpublishers.com
This survey provides an exposition of a suite of techniques based on the theory of
polynomials, collectively referred to as polynomial methods, which have recently been …

Efficient algorithms for sparse moment problems without separation

Z Fan, J Li - The Thirty Sixth Annual Conference on Learning …, 2023 - proceedings.mlr.press
We consider the sparse moment problem of learning a $k$-spike mixture in high-dimensional space from its noisy moment information in any dimension. We measure the …
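
For orientation, the exact-moment core of such sparse moment problems in one dimension is the classical Prony / method-of-moments step: the first $2k$ moments $m_t = \sum_j w_j a_j^t$ determine the $k$ atoms as the roots of a polynomial whose coefficients solve a Hankel linear system. A noiseless sketch of that step only (the paper's contribution is handling noisy moments and higher dimensions, which this does not attempt):

```python
# Classical Prony step: recover k atoms and weights of a k-spike
# distribution on the real line from its first 2k exact moments.
import numpy as np

def prony_1d(m, k):
    """m: moments m_0..m_{2k-1}; returns (atoms, weights)."""
    # Prony polynomial p(z) = z^k + c_{k-1} z^{k-1} + ... + c_0 satisfies
    #   sum_{s<k} c_s m_{t+s} = -m_{t+k}  for t = 0..k-1 (Hankel system).
    H = np.array([[m[t + s] for s in range(k)] for t in range(k)])
    c = np.linalg.solve(H, -np.array([m[t + k] for t in range(k)]))
    atoms = np.roots(np.concatenate(([1.0], c[::-1])))  # roots = atoms
    # Weights solve the Vandermonde system V w = (m_0..m_{k-1}),
    # where V[t, j] = atoms[j]**t.
    V = np.vander(atoms, k, increasing=True).T
    weights = np.linalg.solve(V, m[:k])
    return np.real(atoms), np.real(weights)

atoms_true = np.array([-1.0, 0.5, 2.0])
w_true = np.array([0.2, 0.5, 0.3])
m = np.array([(w_true * atoms_true**t).sum() for t in range(6)])
print(prony_1d(m, 3))   # recovers the atoms and weights above
```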

Reward-mixing MDPs with few latent contexts are learnable

J Kwon, Y Efroni, C Caramanis… - … on Machine Learning, 2023 - proceedings.mlr.press
We consider episodic reinforcement learning in reward-mixing Markov decision processes
(RMMDPs): at the beginning of every episode nature randomly picks a latent reward model …

Lower bounds on the total variation distance between mixtures of two Gaussians

S Davies, A Mazumdar, S Pal… - International …, 2022 - proceedings.mlr.press
Mixtures of high dimensional Gaussian distributions have been studied extensively in
statistics and learning theory. While the total variation distance appears naturally in the …

Entropic characterization of optimal rates for learning Gaussian mixtures

Z Jia, Y Polyanskiy, Y Wu - The Thirty Sixth Annual …, 2023 - proceedings.mlr.press
We consider the question of estimating multi-dimensional Gaussian mixtures (GM) with compactly supported or subgaussian mixing distributions. Minimax estimation rate for this class …

Optimal estimation and computational limit of low-rank Gaussian mixtures

Z Lyu, D Xia - The Annals of Statistics, 2023 - projecteuclid.org
The Annals of Statistics, 2023, Vol. 51, No. 2, 646–667. https://doi.org/10.1214/23-AOS2264