Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions
We provide theoretical convergence guarantees for score-based generative models (SGMs)
such as denoising diffusion probabilistic models (DDPMs), which constitute the backbone of …
Improved analysis of score-based generative modeling: User-friendly bounds under minimal smoothness assumptions
We give an improved theoretical analysis of score-based generative modeling. Under a
score estimate with small $L^2$ error (averaged across timesteps), we provide efficient …
Convergence for score-based generative modeling with polynomial complexity
Score-based generative modeling (SGM) is a highly successful approach for learning a
probability distribution from data and generating further samples. We prove the first …
Restoration-degradation beyond linear diffusions: A non-asymptotic analysis for DDIM-type samplers
We develop a framework for non-asymptotic analysis of deterministic samplers used for
diffusion generative modeling. Several recent works have analyzed stochastic samplers …
An invitation to sequential Monte Carlo samplers
Statisticians often use Monte Carlo methods to approximate probability
distributions, primarily with Markov chain Monte Carlo and importance sampling. Sequential …
Linear convergence bounds for diffusion models via stochastic localization
Diffusion models are a powerful method for generating approximate samples from high-
dimensional data distributions. Several recent results have provided polynomial bounds on …
Analysis of Langevin Monte Carlo from Poincaré to log-Sobolev
Classically, the continuous-time Langevin diffusion converges exponentially fast to its
stationary distribution π under the sole assumption that π satisfies a Poincaré inequality …
Variational inference via Wasserstein gradient flows
Along with Markov chain Monte Carlo (MCMC) methods, variational inference (VI)
has emerged as a central computational approach to large-scale Bayesian inference …
Towards a theory of non-log-concave sampling: First-order stationarity guarantees for Langevin Monte Carlo
K Balasubramanian, S Chewi… - … on Learning Theory, 2022 - proceedings.mlr.press
For the task of sampling from a density $\pi \propto \exp(-V)$ on $\mathbb{R}^d$, where $V$ is
possibly non-convex but $L$-gradient Lipschitz, we prove that averaged Langevin Monte …
Faster high-accuracy log-concave sampling via algorithmic warm starts
JM Altschuler, S Chewi - Journal of the ACM, 2024 - dl.acm.org
It is a fundamental problem to understand the complexity of high-accuracy sampling from a
strongly log-concave density π on ℝ^d. Indeed, in practice, high-accuracy samplers such as …