Entropic neural optimal transport via diffusion processes
We propose a novel neural algorithm for the fundamental problem of computing the entropic
optimal transport (EOT) plan between probability distributions which are accessible by …
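For context, the entropic OT problem referenced here is standardly written as (a textbook formulation, not this paper's specific algorithm):
\[ \pi^{\ast} = \arg\min_{\pi \in \Pi(\mu,\nu)} \int c(x,y)\, \mathrm{d}\pi(x,y) + \varepsilon\, \mathrm{KL}\big(\pi \,\|\, \mu \otimes \nu\big), \]
where $\Pi(\mu,\nu)$ is the set of couplings with marginals $\mu$ and $\nu$, and $\varepsilon > 0$ is the entropic regularization strength.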
Unifying GANs and score-based diffusion as generative particle models
Particle-based deep generative models, such as gradient flows and score-based diffusion
models, have recently gained traction thanks to their striking performance. Their principle of …
Normalizing flow neural networks by JKO scheme
Normalizing flow is a class of deep generative models for efficient sampling and likelihood
estimation, which achieves attractive performance, particularly in high dimensions. The flow …
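As a reminder of how normalizing flows yield exact likelihoods (standard background, not the JKO-based construction of this paper):
\[ \log p_X(x) = \log p_Z\big(f_\theta(x)\big) + \log \left| \det \frac{\partial f_\theta(x)}{\partial x} \right|, \]
where $f_\theta$ is the invertible flow mapping data to a simple base distribution $p_Z$.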
Optimizing functionals on the space of probabilities with input convex neural networks
Gradient flows are a powerful tool for optimizing functionals in general metric spaces,
including the space of probabilities endowed with the Wasserstein metric. A typical …
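The Wasserstein gradient-flow discretization such methods build on is the JKO scheme (standard background; the input-convex-network parameterization is the paper's contribution and is not reproduced here):
\[ \rho_{k+1} = \arg\min_{\rho} \; \mathcal{F}(\rho) + \frac{1}{2\tau}\, W_2^2(\rho, \rho_k), \]
where $\mathcal{F}$ is the functional being minimized, $W_2$ is the 2-Wasserstein distance, and $\tau > 0$ is the step size.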
Improved dimension dependence of a proximal algorithm for sampling
We propose a sampling algorithm that achieves superior complexity bounds in all the
classical settings (strongly log-concave, log-concave, log-Sobolev inequality (LSI) …
Neural optimal transport with general cost functionals
Neural optimal transport techniques mostly use Euclidean cost functions, such as $\ell^1$
or $\ell^2$. These costs are suitable for translation tasks between related domains, but they …
Posterior sampling based on gradient flows of the MMD with negative distance kernel
We propose conditional flows of the maximum mean discrepancy (MMD) with the negative
distance kernel for posterior sampling and conditional generative modeling. This MMD …
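For reference, the MMD with the negative distance (energy) kernel $K(x,y) = -\|x - y\|$ used in such flows has the standard form (a background definition, not the paper's conditional construction):
\[ \mathrm{MMD}^2(\mu, \nu) = \mathbb{E}_{x, x' \sim \mu}\, K(x, x') + \mathbb{E}_{y, y' \sim \nu}\, K(y, y') - 2\, \mathbb{E}_{x \sim \mu,\, y \sim \nu}\, K(x, y). \]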
Particle-based variational inference with generalized Wasserstein gradient flow
Particle-based variational inference methods (ParVIs) such as Stein variational gradient
descent (SVGD) update the particles based on the kernelized Wasserstein gradient flow for …
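The SVGD update mentioned here has the standard form (background; the paper's generalized gradient flow differs and is not reproduced):
\[ x_i \leftarrow x_i + \epsilon\, \hat{\phi}(x_i), \qquad \hat{\phi}(x) = \frac{1}{n} \sum_{j=1}^{n} \Big[ k(x_j, x)\, \nabla_{x_j} \log p(x_j) + \nabla_{x_j} k(x_j, x) \Big], \]
where $k$ is a positive-definite kernel, $p$ is the target density, and $\epsilon$ is the step size.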
Neural Wasserstein gradient flows for maximum mean discrepancies with Riesz kernels
Wasserstein gradient flows of maximum mean discrepancy (MMD) functionals with non-
smooth Riesz kernels show a rich structure as singular measures can become absolutely …
Self-consistent velocity matching of probability flows
We present a discretization-free scalable framework for solving a large class of mass-
conserving partial differential equations (PDEs), including the time-dependent Fokker …
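The class of PDEs referred to consists of equations of continuity form (standard background; the velocity-matching objective itself is not reproduced here):
\[ \partial_t \rho_t + \nabla \cdot (\rho_t\, v_t) = 0, \]
which recovers the Fokker--Planck equation with potential $V$ and unit diffusion when $v_t = -\nabla V - \nabla \log \rho_t$.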