Learning mixtures of Gaussians using diffusion models
We give a new algorithm for learning mixtures of $k$ Gaussians (with identity covariance in $\mathbb{R}^n$) to TV error $\varepsilon$, with quasi-polynomial ($O(n^{\text{poly …
Randomly initialized EM algorithm for two-component Gaussian mixture achieves near optimality in iterations
We analyze the classical EM algorithm for parameter estimation in symmetric two-component Gaussian mixtures in d dimensions. We show that, even in the absence of any …
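For the symmetric model above, the EM update collapses to a one-line fixed-point iteration; a minimal simulation sketch, assuming the mixture $0.5\,N(\theta^*, I) + 0.5\,N(-\theta^*, I)$ with illustrative parameter values (not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric two-component mixture 0.5*N(theta*, I) + 0.5*N(-theta*, I)
# in d dimensions; dimension, sample size, and signal are illustrative.
d, n = 5, 20000
theta_star = np.zeros(d)
theta_star[0] = 2.0

signs = rng.choice([-1.0, 1.0], size=n)
X = signs[:, None] * theta_star + rng.standard_normal((n, d))

# For this model the EM update reduces to the fixed-point iteration
#   theta <- (1/n) * sum_i tanh(<x_i, theta>) * x_i,
# the sample version of E[tanh(<X, theta>) X].
theta = rng.standard_normal(d)  # random initialization, as in the paper's setting
for _ in range(200):
    theta = (np.tanh(X @ theta)[:, None] * X).mean(axis=0)

# The parameter is identifiable only up to sign.
err = min(np.linalg.norm(theta - theta_star), np.linalg.norm(theta + theta_star))
print(err)
```

With this separation the iteration converges from a random start to one of the two sign-equivalent optima, up to the $O(\sqrt{d/n})$ statistical error.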
Refined convergence rates for maximum likelihood estimation under finite mixture models
We revisit the classical problem of deriving convergence rates for the maximum likelihood
estimator (MLE) in finite mixture models. The Wasserstein distance has become a standard …
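Convergence rates in this literature are stated in Wasserstein distance between mixing measures. As a toy illustration, in one dimension with equal atom weights, $W_1$ reduces to matching sorted atoms (hypothetical helper, not code from the paper):

```python
import numpy as np

def w1_equal_weights(atoms_a, atoms_b):
    # First-order Wasserstein distance between two discrete mixing measures
    # with equal atom weights on the real line: the optimal coupling matches
    # sorted atoms, so W1 is the average gap between them.
    a, b = np.sort(atoms_a), np.sort(atoms_b)
    return np.abs(a - b).mean()

# True mixing measure vs. an estimate for a 3-component mixture.
true_atoms = np.array([-2.0, 0.0, 2.0])
est_atoms = np.array([-1.9, 0.2, 2.1])
print(w1_equal_weights(true_atoms, est_atoms))  # ≈ 0.1333
```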
Multivariate, heteroscedastic empirical bayes via nonparametric maximum likelihood
Multivariate, heteroscedastic errors complicate statistical inference in many large-scale denoising problems. Empirical Bayes is attractive in such settings, but standard parametric …
Polynomial methods in statistical inference: Theory and practice
This survey provides an exposition of a suite of techniques based on the theory of
polynomials, collectively referred to as polynomial methods, which have recently been …
Efficient algorithms for sparse moment problems without separation
We consider the sparse moment problem of learning a $k$-spike mixture in high-dimensional space from its noisy moment information in any dimension. We measure the …
Reward-mixing MDPs with few latent contexts are learnable
We consider episodic reinforcement learning in reward-mixing Markov decision processes
(RMMDPs): at the beginning of every episode nature randomly picks a latent reward model …
Lower bounds on the total variation distance between mixtures of two gaussians
Mixtures of high dimensional Gaussian distributions have been studied extensively in
statistics and learning theory. While the total variation distance appears naturally in the …
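The total variation distance between two one-dimensional mixtures can be approximated numerically from $\mathrm{TV}(p,q) = \tfrac12 \int |p - q|$; a small sketch with illustrative parameters (not the paper's construction):

```python
import numpy as np

def gauss_pdf(x, mu, sigma=1.0):
    # Density of N(mu, sigma^2).
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Two mixtures of two unit-variance Gaussians; component means are illustrative.
grid = np.linspace(-10.0, 10.0, 20001)
dx = grid[1] - grid[0]
p = 0.5 * gauss_pdf(grid, -1.0) + 0.5 * gauss_pdf(grid, 1.0)
q = 0.5 * gauss_pdf(grid, -1.5) + 0.5 * gauss_pdf(grid, 1.5)

# TV(p, q) = 0.5 * integral of |p - q|, approximated by a Riemann sum.
tv = 0.5 * np.sum(np.abs(p - q)) * dx
print(tv)
```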
Entropic characterization of optimal rates for learning Gaussian mixtures
We consider the question of estimating multi-dimensional Gaussian mixtures (GM) with compactly supported or subgaussian mixing distributions. The minimax estimation rate for this class …
Optimal estimation and computational limit of low-rank Gaussian mixtures
The Annals of Statistics 2023, Vol. 51, No. 2, 646–667. https://doi.org/10.1214/23-AOS2264 …