Learning mixtures of Gaussians using the DDPM objective

K Shah, S Chen, A Klivans - Advances in Neural …, 2023 - proceedings.neurips.cc
Recent works have shown that diffusion models can learn essentially any distribution
provided one can perform score estimation. Yet it remains poorly understood under what …
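
As a purely illustrative aside (my own toy sketch, not an excerpt of the paper's algorithm or analysis), the DDPM objective referenced in the title is the standard denoising loss E||eps - eps_theta(x_t, t)||^2; below it is evaluated on samples from a 1-D two-component Gaussian mixture, with the neural score network replaced by a toy per-timestep linear noise predictor.

```python
# Illustrative sketch (not the paper's algorithm): the standard DDPM
# denoising objective E||eps - eps_theta(x_t, t)||^2 on samples from a
# 1-D two-component Gaussian mixture. The "network" is a toy per-timestep
# linear noise predictor, purely for demonstration.
import numpy as np

rng = np.random.default_rng(0)

def sample_gmm(n, means=(-2.0, 2.0), std=0.5):
    comps = rng.integers(0, 2, size=n)
    return rng.normal(np.array(means)[comps], std)

T = 100
betas = np.linspace(1e-4, 0.02, T)
alpha_bars = np.cumprod(1.0 - betas)

w = np.zeros(T)   # toy noise predictor: eps_theta(x, t) = w[t] * x + b[t]
b = np.zeros(T)

def train_ddpm(x0, lr=1e-2, steps=2000):
    """SGD on the denoising objective; returns the last mini-batch loss."""
    loss = None
    for _ in range(steps):
        t = rng.integers(0, T)
        eps = rng.normal(size=x0.shape)
        xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
        resid = w[t] * xt + b[t] - eps
        loss = np.mean(resid ** 2)
        w[t] -= lr * np.mean(resid * xt)   # gradient of the squared error
        b[t] -= lr * np.mean(resid)
    return loss

print("final mini-batch loss:", train_ddpm(sample_gmm(4096)))
```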

DPGN: Distribution propagation graph network for few-shot learning

L Yang, L Li, Z Zhang, X Zhou… - Proceedings of the …, 2020 - openaccess.thecvf.com
Most graph-network-based meta-learning approaches model the instance-level relation of
examples. We extend this idea further to explicitly model the distribution-level relation of one …

Spectral methods for data science: A statistical perspective

Y Chen, Y Chi, J Fan, C Ma - Foundations and Trends® in …, 2021 - nowpublishers.com
Spectral methods have emerged as a simple yet surprisingly effective approach for
extracting information from massive, noisy and incomplete data. In a nutshell, spectral …
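
As a minimal illustration of the kind of spectral procedure surveyed here (an assumed toy example on my part, not text or code from the monograph): estimate a planted signal direction in a noisy data matrix by taking the top eigenvector of the empirical second-moment matrix.

```python
# Toy spectral estimation sketch: recover a rank-1 signal direction from
# a noisy data matrix via the top eigenvector of the empirical second
# moment. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 50
u = rng.normal(size=d)
u /= np.linalg.norm(u)                      # planted direction
X = rng.normal(size=(n, 1)) @ u[None, :]    # rank-1 signal
X += 0.5 * rng.normal(size=(n, d))          # additive noise

M = X.T @ X / n                             # empirical second moment
eigvals, eigvecs = np.linalg.eigh(M)
u_hat = eigvecs[:, -1]                      # spectral estimate

print("alignment |<u_hat, u>| =", abs(u_hat @ u))
```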

Robust estimators in high-dimensions without the computational intractability

I Diakonikolas, G Kamath, D Kane, J Li, A Moitra… - SIAM Journal on …, 2019 - SIAM
We study high-dimensional distribution learning in an agnostic setting where an adversary is
allowed to arbitrarily corrupt an ε-fraction of the samples. Such questions have a rich history …

Tensor decompositions for learning latent variable models

A Anandkumar, R Ge, DJ Hsu, SM Kakade… - J. Mach. Learn. Res …, 2014 - jmlr.org
This work considers a computationally and statistically efficient parameter estimation method
for a wide class of latent variable models—including Gaussian mixture models, hidden …
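
The core primitive in this line of method-of-moments work is tensor power iteration on an (approximately) orthogonally decomposable third-order moment tensor. The sketch below is my own minimal illustration under that assumption, with the tensor built directly from known parameters rather than estimated and whitened from data as a real application would require.

```python
# Minimal tensor power iteration sketch for an orthogonally decomposable
# third-order tensor T = sum_k w_k * mu_k (x) mu_k (x) mu_k. Illustrative
# only; practical use first whitens an empirical moment tensor.
import numpy as np

rng = np.random.default_rng(2)
d, k = 8, 3
mus, _ = np.linalg.qr(rng.normal(size=(d, k)))   # orthonormal components
weights = np.array([0.5, 0.3, 0.2])

T = np.zeros((d, d, d))
for wgt, mu in zip(weights, mus.T):
    T += wgt * np.einsum('i,j,k->ijk', mu, mu, mu)

def power_iterate(T, iters=100):
    """Recover one component as a fixed point of v <- T(I, v, v) / ||.||."""
    v = rng.normal(size=T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = np.einsum('ijk,j,k->i', T, v, v)
        v /= np.linalg.norm(v)
    return v

v = power_iterate(T)
print("best match to a true component:", np.max(np.abs(mus.T @ v)))
```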

Agnostic estimation of mean and covariance

KA Lai, AB Rao, S Vempala - 2016 IEEE 57th Annual …, 2016 - ieeexplore.ieee.org
We consider the problem of estimating the mean and covariance of a distribution from iid
samples in the presence of a fraction of malicious noise. This is in contrast to much recent …
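
To make the problem setting concrete, here is a toy spectral-filtering heuristic in the spirit of this line of work: repeatedly find the direction of largest empirical variance and trim the most extreme points along it. This is an illustrative sketch of the setup, not the algorithm analyzed in the paper.

```python
# Toy spectral-filtering heuristic for mean estimation when an
# eps-fraction of samples is corrupted. Illustrative sketch only, not the
# authors' algorithm.
import numpy as np

rng = np.random.default_rng(3)
n, d, eps = 1000, 20, 0.1
X = rng.normal(size=(n, d))                       # inliers ~ N(0, I)
X[: int(eps * n)] += 10.0                         # corrupted fraction

def filtered_mean(X, rounds=10, frac=0.02):
    X = X.copy()
    for _ in range(rounds):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        _, vecs = np.linalg.eigh(cov)
        v = vecs[:, -1]                           # direction outliers inflate
        scores = np.abs((X - mu) @ v)
        keep = scores <= np.quantile(scores, 1.0 - frac)
        X = X[keep]                               # drop extreme points
    return X.mean(axis=0)

print("naive mean error:   ", np.linalg.norm(X.mean(axis=0)))
print("filtered mean error:", np.linalg.norm(filtered_mean(X)))
```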

Statistical query lower bounds for robust estimation of high-dimensional Gaussians and Gaussian mixtures

I Diakonikolas, DM Kane… - 2017 IEEE 58th Annual …, 2017 - ieeexplore.ieee.org
We describe a general technique that yields the first Statistical Query lower bounds for a
range of fundamental high-dimensional learning problems involving Gaussian distributions …

Multi-objective reinforcement learning using sets of Pareto dominating policies

K Van Moffaert, A Nowé - The Journal of Machine Learning Research, 2014 - jmlr.org
Many real-world problems involve the optimization of multiple, possibly conflicting
objectives. Multi-objective reinforcement learning (MORL) is a generalization of standard …
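
As background for the Pareto-based MORL setting, the helpers below (my own illustration, not code from the paper) show the standard dominance test between reward vectors and the extraction of the non-dominated set from a collection of candidate policy returns.

```python
# Illustrative helpers (not from the paper): Pareto dominance between
# reward vectors and extraction of the Pareto front of policy returns.
import numpy as np

def dominates(a, b):
    """a Pareto-dominates b: >= on every objective and > on at least one."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a >= b) and np.any(a > b))

def pareto_front(points):
    """Return the points not dominated by any other point."""
    points = np.asarray(points)
    return np.array([p for i, p in enumerate(points)
                     if not any(dominates(q, p)
                                for j, q in enumerate(points) if j != i)])

returns = [(1.0, 5.0), (2.0, 4.0), (1.5, 4.5), (0.5, 0.5)]
print(pareto_front(returns))
```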

The limitations of adversarial training and the blind-spot attack

H Zhang, H Chen, Z Song, D Boning, IS Dhillon… - arXiv preprint arXiv …, 2019 - arxiv.org
The adversarial training procedure proposed by Madry et al. (2018) is one of the most
effective methods to defend against adversarial examples in deep neural networks (DNNs) …
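
For context, the Madry et al. procedure alternates PGD attack generation with training on the resulting adversarial examples. The sketch below is a toy NumPy version on a logistic-regression model (an assumption for brevity; the paper and the cited defense concern deep networks).

```python
# Minimal sketch of PGD-based adversarial training (in the style of
# Madry et al., 2018) for a logistic-regression model. Purely
# illustrative; the original procedure targets deep networks.
import numpy as np

rng = np.random.default_rng(4)
n, d = 200, 10
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(w, x, y, eps=0.3, alpha=0.1, steps=5):
    """L_inf PGD: ascend the logistic loss within an eps-ball around x."""
    x_adv = x.copy()
    for _ in range(steps):
        grad_x = (sigmoid(x_adv @ w) - y)[:, None] * w[None, :]
        x_adv = x_adv + alpha * np.sign(grad_x)
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project back to the ball
    return x_adv

w = np.zeros(d)
for _ in range(200):                               # adversarial training loop
    X_adv = pgd_attack(w, X, y)
    grad_w = X_adv.T @ (sigmoid(X_adv @ w) - y) / n
    w -= 0.5 * grad_w

print("clean accuracy:", np.mean((X @ w > 0) == (y > 0.5)))
```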

Mixture models, robustness, and sum of squares proofs

SB Hopkins, J Li - Proceedings of the 50th Annual ACM SIGACT …, 2018 - dl.acm.org
We use the Sum of Squares method to develop new efficient algorithms for learning well-
separated mixtures of Gaussians and robust mean estimation, both in high dimensions, that …