Learning mixtures of Gaussians using the DDPM objective
Recent works have shown that diffusion models can learn essentially any distribution
provided one can perform score estimation. Yet it remains poorly understood under what …
DPGN: Distribution propagation graph network for few-shot learning
Most graph-network-based meta-learning approaches model instance-level relation of
examples. We extend this idea further to explicitly model the distribution-level relation of one …
Spectral methods for data science: A statistical perspective
Spectral methods have emerged as a simple yet surprisingly effective approach for
extracting information from massive, noisy and incomplete data. In a nutshell, spectral …
Robust estimators in high-dimensions without the computational intractability
We study high-dimensional distribution learning in an agnostic setting where an adversary is
allowed to arbitrarily corrupt an ε-fraction of the samples. Such questions have a rich history …
Tensor decompositions for learning latent variable models
This work considers a computationally and statistically efficient parameter estimation method
for a wide class of latent variable models—including Gaussian mixture models, hidden …
Agnostic estimation of mean and covariance
We consider the problem of estimating the mean and covariance of a distribution from iid
samples in the presence of a fraction of malicious noise. This is in contrast to much recent …
Statistical query lower bounds for robust estimation of high-dimensional Gaussians and Gaussian mixtures
We describe a general technique that yields the first Statistical Query lower bounds for a
range of fundamental high-dimensional learning problems involving Gaussian distributions …
Multi-objective reinforcement learning using sets of Pareto dominating policies
Many real-world problems involve the optimization of multiple, possibly conflicting
objectives. Multi-objective reinforcement learning (MORL) is a generalization of standard …
The limitations of adversarial training and the blind-spot attack
The adversarial training procedure proposed by Madry et al. (2018) is one of the most
effective methods to defend against adversarial examples in deep neural networks (DNNs) …
Mixture models, robustness, and sum of squares proofs
We use the Sum of Squares method to develop new efficient algorithms for learning well-
separated mixtures of Gaussians and robust mean estimation, both in high dimensions, that …