Analysis of Langevin Monte Carlo from Poincaré to log-Sobolev

S Chewi, MA Erdogdu, M Li, R Shen… - Foundations of …, 2024 - Springer
Classically, the continuous-time Langevin diffusion converges exponentially fast to its
stationary distribution π under the sole assumption that π satisfies a Poincaré inequality …
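
For orientation, the two objects in this entry are standard. The (overdamped) Langevin diffusion targeting $\pi \propto e^{-V}$ and the Poincaré inequality are usually written as

$dX_t = -\nabla V(X_t)\,dt + \sqrt{2}\,dB_t, \qquad \mathrm{Var}_\pi(g) \le C_{\mathrm{P}}\,\mathbb{E}_\pi[\|\nabla g\|^2] \ \text{for all smooth } g,$

and under the Poincaré inequality the diffusion contracts chi-squared divergence to $\pi$ at rate $e^{-2t/C_{\mathrm{P}}}$.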

Faster high-accuracy log-concave sampling via algorithmic warm starts

JM Altschuler, S Chewi - Journal of the ACM, 2024 - dl.acm.org
It is a fundamental problem to understand the complexity of high-accuracy sampling from a
strongly log-concave density π on ℝ^d. Indeed, in practice, high-accuracy samplers such as …

On the convergence of Langevin Monte Carlo: The interplay between tail growth and smoothness

MA Erdogdu, R Hosseinzadeh - Conference on Learning …, 2021 - proceedings.mlr.press
We study sampling from a target distribution $\nu_* = e^{-f}$ using the unadjusted Langevin
Monte Carlo (LMC) algorithm. For any potential function $f$ whose tails behave like …
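
The unadjusted LMC iteration studied here is the Euler-Maruyama discretization of the Langevin diffusion for $\nu_* = e^{-f}$, i.e. $x_{k+1} = x_k - h\nabla f(x_k) + \sqrt{2h}\,\xi_k$ with $\xi_k \sim N(0, I)$. A minimal Python sketch (the step size, iteration count, and Gaussian test target are illustrative choices, not taken from the paper):

    import numpy as np

    def lmc(grad_f, x0, h=1e-2, n_steps=10_000, rng=None):
        """Unadjusted Langevin Monte Carlo: x_{k+1} = x_k - h*grad_f(x_k) + sqrt(2h)*xi_k."""
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(x0, dtype=float).copy()
        samples = np.empty((n_steps, x.size))
        for k in range(n_steps):
            x = x - h * grad_f(x) + np.sqrt(2 * h) * rng.standard_normal(x.size)
            samples[k] = x
        return samples

    # Illustrative target: standard Gaussian, f(x) = ||x||^2 / 2, so grad_f(x) = x.
    draws = lmc(grad_f=lambda x: x, x0=np.zeros(2))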

Resolving the mixing time of the Langevin algorithm to its stationary distribution for log-concave sampling

JM Altschuler, K Talwar - arXiv preprint arXiv:2210.08448, 2022 - arxiv.org
Sampling from a high-dimensional distribution is a fundamental task in statistics,
engineering, and the sciences. A canonical approach is the Langevin Algorithm, i.e., the …

High-order Langevin diffusion yields an accelerated MCMC algorithm

W Mou, YA Ma, MJ Wainwright, PL Bartlett… - Journal of Machine …, 2021 - jmlr.org
We propose a Markov chain Monte Carlo (MCMC) algorithm based on third-order Langevin
dynamics for sampling from distributions with smooth, log-concave densities. The higher …
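
For context (this is the base case, not the paper's third-order system): the second-order, underdamped Langevin diffusion that higher-order schemes extend is

$dx_t = v_t\,dt, \qquad dv_t = -\gamma v_t\,dt - \nabla f(x_t)\,dt + \sqrt{2\gamma}\,dB_t,$

whose $x$-marginal is stationary at a density proportional to $e^{-f}$; roughly speaking, the third-order dynamics introduce a further auxiliary variable so that the sample path is smoother and discretizes more accurately.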

On the ergodicity, bias and asymptotic normality of randomized midpoint sampling method

Y He, K Balasubramanian… - Advances in Neural …, 2020 - proceedings.neurips.cc
The randomized midpoint method, proposed by (Shen and Lee, 2019), has emerged as an
optimal discretization procedure for simulating the continuous time underdamped Langevin …
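
The randomized midpoint trick evaluates the drift at a uniformly random time inside each step rather than at its left endpoint, which removes the leading-order discretization bias in expectation. A sketch of the idea for the overdamped Langevin diffusion (the cited analyses center on the underdamped diffusion, where the Ornstein-Uhlenbeck part is integrated exactly; the step size here is illustrative):

    import numpy as np

    def randomized_midpoint_lmc(grad_f, x0, h=1e-2, n_steps=10_000, rng=None):
        """Randomized midpoint discretization of overdamped Langevin (illustrative sketch)."""
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(x0, dtype=float).copy()
        d = x.size
        for _ in range(n_steps):
            u = rng.uniform()                                               # random point in (0, 1)
            b_mid = np.sqrt(u * h) * rng.standard_normal(d)                 # Brownian increment B_{uh}
            b_full = b_mid + np.sqrt((1 - u) * h) * rng.standard_normal(d)  # B_h, coupled with B_{uh}
            y = x - u * h * grad_f(x) + np.sqrt(2.0) * b_mid                # crude predictor at time uh
            x = x - h * grad_f(y) + np.sqrt(2.0) * b_full                   # drift evaluated at the random midpoint
        return x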

Convergence of Langevin Monte Carlo in chi-squared and Rényi divergence

MA Erdogdu, R Hosseinzadeh… - … Conference on Artificial …, 2022 - proceedings.mlr.press
We study sampling from a target distribution $\nu_* = e^{-f}$ using the unadjusted Langevin
Monte Carlo (LMC) algorithm when the potential $f$ satisfies a strong dissipativity condition …
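
For readers skimming the list, the two notions in this entry are commonly stated as follows (the constants $a, b$ are generic, and this is one common formulation of strong dissipativity, not necessarily the paper's exact assumption): the Rényi divergence of order $q > 1$ is $R_q(\mu\,\|\,\nu) = \frac{1}{q-1}\log\int(\frac{d\mu}{d\nu})^q\,d\nu$, with $R_2(\mu\,\|\,\nu) = \log(1 + \chi^2(\mu\,\|\,\nu))$, and strong dissipativity asks that $\langle \nabla f(x) - \nabla f(y),\, x - y\rangle \ge a\|x - y\|^2 - b$ for some $a > 0$, $b \ge 0$.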

Lower bounds on Metropolized sampling methods for well-conditioned distributions

YT Lee, R Shen, K Tian - Advances in Neural Information …, 2021 - proceedings.neurips.cc
We give lower bounds on the performance of two of the most popular sampling methods in
practice, the Metropolis-adjusted Langevin algorithm (MALA) and multi-step Hamiltonian …
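
MALA, the first of the two methods whose complexity is bounded from below here, corrects the Langevin proposal with a Metropolis-Hastings accept/reject step so that the chain leaves $\pi(x) \propto e^{-f(x)}$ exactly invariant. A minimal Python sketch (step size and target are illustrative, not from the paper):

    import numpy as np

    def mala(f, grad_f, x0, h=1e-2, n_steps=10_000, rng=None):
        """Metropolis-adjusted Langevin algorithm targeting pi(x) proportional to exp(-f(x))."""
        rng = np.random.default_rng() if rng is None else rng

        def log_q(xp, x):
            # Log density (up to a constant) of the proposal N(x - h*grad_f(x), 2h*I) at xp.
            diff = xp - x + h * grad_f(x)
            return -np.dot(diff, diff) / (4 * h)

        x = np.asarray(x0, dtype=float).copy()
        samples = np.empty((n_steps, x.size))
        for k in range(n_steps):
            y = x - h * grad_f(x) + np.sqrt(2 * h) * rng.standard_normal(x.size)
            log_alpha = (f(x) - f(y)) + log_q(x, y) - log_q(y, x)
            if np.log(rng.uniform()) < log_alpha:
                x = y
            samples[k] = x
        return samples

    # Illustrative target: standard Gaussian.
    draws = mala(f=lambda x: 0.5 * np.dot(x, x), grad_f=lambda x: x, x0=np.zeros(2))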

Query lower bounds for log-concave sampling

S Chewi, J de Dios Pont, J Li, C Lu, S Narayanan - Journal of the ACM, 2024 - dl.acm.org
Log-concave sampling has witnessed remarkable algorithmic advances in recent years, but
the corresponding problem of proving lower bounds for this task has remained elusive, with …

Nonlinear Hamiltonian Monte Carlo & its particle approximation

N Bou-Rabee, K Schuh - arXiv preprint arXiv:2308.11491, 2023 - arxiv.org
We present a nonlinear (in the sense of McKean) generalization of Hamiltonian Monte Carlo
(HMC) termed nonlinear HMC (nHMC) capable of sampling from nonlinear probability …
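
For orientation, the standard HMC step that nHMC generalizes alternates a Gaussian momentum refreshment with a leapfrog integration of Hamiltonian dynamics; in the nonlinear (McKean) setting the potential additionally depends on the law of the chain itself. A sketch of the standard unadjusted building block (all tuning constants below are illustrative):

    import numpy as np

    def hmc_step(grad_f, x, step_size, n_leapfrog, rng):
        """One unadjusted HMC step: full momentum refreshment followed by leapfrog integration."""
        v = rng.standard_normal(x.size)            # fresh Gaussian momentum
        x_new = x.copy()
        v -= 0.5 * step_size * grad_f(x_new)       # initial half kick
        for _ in range(n_leapfrog - 1):
            x_new += step_size * v                 # drift
            v -= step_size * grad_f(x_new)         # full kick
        x_new += step_size * v
        v -= 0.5 * step_size * grad_f(x_new)       # final half kick
        return x_new

    # Illustrative use on a standard Gaussian target, f(x) = ||x||^2 / 2.
    rng = np.random.default_rng(0)
    x = np.zeros(2)
    for _ in range(1000):
        x = hmc_step(lambda z: z, x, step_size=0.1, n_leapfrog=10, rng=rng)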