Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions

S Chen, S Chewi, J Li, Y Li, A Salim… - arXiv preprint arXiv …, 2022 - arxiv.org
We provide theoretical convergence guarantees for score-based generative models (SGMs)
such as denoising diffusion probabilistic models (DDPMs), which constitute the backbone of …
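
For readers unfamiliar with the sampler these guarantees concern, here is a minimal sketch of DDPM ancestral sampling (Ho et al., 2020); `eps_model` stands in for a trained noise-prediction network and is a hypothetical name, and the choice of posterior variance σ_t² = β_t is one standard option, not taken from this paper:

```python
import numpy as np

def ddpm_sample(eps_model, betas, shape, rng=np.random.default_rng()):
    """Ancestral DDPM sampler: run the learned reverse chain from
    pure noise x_T ~ N(0, I) down to a sample x_0.

    eps_model(x, t) -> predicted noise (stand-in for a trained network).
    betas: forward-process variances beta_1, ..., beta_T.
    """
    betas = np.asarray(betas, dtype=float)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)          # \bar{alpha}_t = prod_{s<=t} alpha_s
    T = len(betas)
    x = rng.standard_normal(shape)           # x_T ~ N(0, I)
    for t in range(T - 1, -1, -1):
        eps = eps_model(x, t)                # estimate of the injected noise
        # posterior mean of x_{t-1} given x_t and the noise estimate
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x = mean + np.sqrt(betas[t]) * rng.standard_normal(shape)
        else:
            x = mean                         # no noise on the final step
    return x
```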

Improved analysis of score-based generative modeling: User-friendly bounds under minimal smoothness assumptions

H Chen, H Lee, J Lu - International Conference on Machine …, 2023 - proceedings.mlr.press
We give an improved theoretical analysis of score-based generative modeling. Under a
score estimate with small $L^2$ error (averaged across timesteps), we provide efficient …
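
The averaged $L^2$ score-error assumption in this snippet takes, in the notation common to this line of work (a reconstruction, not a quote from the paper), the following form:

```latex
% With forward marginals q_{t_k}, score estimate s_\theta, and
% discretization times t_1, ..., t_N (all notation assumed):
\[
\frac{1}{N}\sum_{k=1}^{N}
\mathbb{E}_{x \sim q_{t_k}}
\bigl\| s_\theta(x, t_k) - \nabla \log q_{t_k}(x) \bigr\|^2
\;\le\; \varepsilon_{\mathrm{score}}^2 .
\]
```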

Convergence for score-based generative modeling with polynomial complexity

H Lee, J Lu, Y Tan - Advances in Neural Information …, 2022 - proceedings.neurips.cc
Score-based generative modeling (SGM) is a highly successful approach for learning a
probability distribution from data and generating further samples. We prove the first …

Restoration-degradation beyond linear diffusions: A non-asymptotic analysis for DDIM-type samplers

S Chen, G Daras, A Dimakis - International Conference on …, 2023 - proceedings.mlr.press
We develop a framework for non-asymptotic analysis of deterministic samplers used for
diffusion generative modeling. Several recent works have analyzed stochastic samplers …
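
Deterministic samplers of this kind (DDIM and its relatives) are typically viewed as discretizations of the probability-flow ODE; a standard form of that ODE, stated here for context rather than taken from the paper, is:

```latex
% For a forward diffusion dX_t = f(X_t, t) dt + g(t) dB_t with
% marginals p_t, the probability-flow ODE
\[
\frac{\mathrm{d}x_t}{\mathrm{d}t}
= f(x_t, t) - \tfrac{1}{2}\, g(t)^2\, \nabla \log p_t(x_t)
\]
% has the same marginals p_t but evolves without injected noise.
```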

An invitation to sequential Monte Carlo samplers

C Dai, J Heng, PE Jacob, N Whiteley - Journal of the American …, 2022 - Taylor & Francis
Statisticians often use Monte Carlo methods to approximate probability
distributions, primarily with Markov chain Monte Carlo and importance sampling. Sequential …
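
As a rough illustration of the method class (not the authors' construction), here is a minimal tempered SMC sampler: particles move from a N(0, I) reference to the target along a geometric bridge, alternating importance reweighting, resampling, and a Metropolis move. Every name and tuning parameter below is an illustrative assumption:

```python
import numpy as np

def smc_sampler(log_target, d, n_particles=1000, n_steps=20, step=0.5,
                rng=np.random.default_rng()):
    """Minimal SMC sampler sketch: bridge pi_b ∝ N(0,I)^{1-b} * target^b
    for b running from 0 to 1, with reweight / resample / move steps."""
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    x = rng.standard_normal((n_particles, d))            # draws from the reference
    log_ref = lambda z: -0.5 * np.sum(z**2, axis=-1)     # N(0, I) up to a constant

    for b_prev, b in zip(betas[:-1], betas[1:]):
        # incremental importance weights for pi_{b_prev} -> pi_b
        log_w = (b - b_prev) * (log_target(x) - log_ref(x))
        w = np.exp(log_w - log_w.max()); w /= w.sum()
        # multinomial resampling
        idx = rng.choice(n_particles, size=n_particles, p=w)
        x = x[idx]
        # one random-walk Metropolis step targeting pi_b
        log_pi = lambda z: (1 - b) * log_ref(z) + b * log_target(z)
        prop = x + step * rng.standard_normal(x.shape)
        accept = np.log(rng.uniform(size=n_particles)) < log_pi(prop) - log_pi(x)
        x[accept] = prop[accept]
    return x
```

For example, `smc_sampler(lambda z: -0.25 * np.sum((z - 2.0)**2, axis=-1), d=2)` targets a Gaussian with mean 2 and variance 2.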

Linear convergence bounds for diffusion models via stochastic localization

J Benton, V De Bortoli, A Doucet… - arXiv preprint arXiv …, 2023 - arxiv.org
Diffusion models are a powerful method for generating approximate samples from high-
dimensional data distributions. Several recent results have provided polynomial bounds on …

Analysis of Langevin Monte Carlo from Poincaré to log-Sobolev

S Chewi, MA Erdogdu, M Li, R Shen… - Foundations of …, 2024 - Springer
Classically, the continuous-time Langevin diffusion converges exponentially fast to its
stationary distribution π under the sole assumption that π satisfies a Poincaré inequality …
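
For context, the objects in this snippet are standard; a plausible rendering of the Langevin diffusion and the Poincaré inequality it refers to (notation assumed, not copied from the paper):

```latex
% Continuous-time Langevin diffusion targeting pi ∝ exp(-V):
\[
\mathrm{d}X_t = -\nabla V(X_t)\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}B_t ,
\]
% and the Poincaré inequality with constant C_PI under which the law
% of X_t converges to pi exponentially fast:
\[
\operatorname{Var}_\pi(f) \;\le\;
C_{\mathrm{PI}}\, \mathbb{E}_\pi\!\bigl[\|\nabla f\|^2\bigr]
\quad \text{for all smooth } f .
\]
```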

Variational inference via Wasserstein gradient flows

M Lambert, S Chewi, F Bach… - Advances in Neural …, 2022 - proceedings.neurips.cc
Along with Markov chain Monte Carlo (MCMC) methods, variational inference (VI)
has emerged as a central computational approach to large-scale Bayesian inference …

Towards a theory of non-log-concave sampling: first-order stationarity guarantees for Langevin Monte Carlo

K Balasubramanian, S Chewi… - … on Learning Theory, 2022 - proceedings.mlr.press
For the task of sampling from a density $\pi \propto \exp(-V)$ on $\mathbb{R}^d$, where $V$ is
possibly non-convex but $L$-gradient Lipschitz, we prove that averaged Langevin Monte …
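
A minimal sketch of the algorithm being analyzed, assuming the standard unadjusted-Langevin update; the "averaged" variant in the abstract refers to averaging over the iterates:

```python
import numpy as np

def langevin_monte_carlo(grad_V, x0, step, n_iters, rng=np.random.default_rng()):
    """Langevin Monte Carlo (unadjusted Langevin algorithm) for
    pi ∝ exp(-V): the Euler-Maruyama discretization
        x_{k+1} = x_k - h * grad V(x_k) + sqrt(2h) * xi_k,  xi_k ~ N(0, I).
    grad_V and the step size h are user-supplied; only an L-Lipschitz
    gradient is assumed, matching the setting in the snippet.
    """
    x = np.array(x0, dtype=float)
    iterates = [x.copy()]
    for _ in range(n_iters):
        x = x - step * grad_V(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
        iterates.append(x.copy())
    # averaging over these iterates gives the "averaged LMC" output
    return np.stack(iterates)
```

For instance, `grad_V = lambda x: x` targets the standard Gaussian.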

Faster high-accuracy log-concave sampling via algorithmic warm starts

JM Altschuler, S Chewi - Journal of the ACM, 2024 - dl.acm.org
It is a fundamental problem to understand the complexity of high-accuracy sampling from a
strongly log-concave density π on ℝ^d. Indeed, in practice, high-accuracy samplers such as …