Entropic neural optimal transport via diffusion processes

N Gushchin, A Kolesov, A Korotin… - Advances in …, 2023 - proceedings.neurips.cc
We propose a novel neural algorithm for the fundamental problem of computing the entropic
optimal transport (EOT) plan between probability distributions which are accessible by …

Unifying GANs and score-based diffusion as generative particle models

JY Franceschi, M Gartrell… - Advances in …, 2024 - proceedings.neurips.cc
Particle-based deep generative models, such as gradient flows and score-based diffusion
models, have recently gained traction thanks to their striking performance. Their principle of …

Normalizing flow neural networks by JKO scheme

C Xu, X Cheng, Y Xie - Advances in Neural Information …, 2023 - proceedings.neurips.cc
Normalizing flows are a class of deep generative models for efficient sampling and likelihood
estimation, which achieve attractive performance, particularly in high dimensions. The flow …

Optimizing functionals on the space of probabilities with input convex neural networks

D Alvarez-Melis, Y Schiff, Y Mroueh - arXiv preprint arXiv:2106.00774, 2021 - arxiv.org
Gradient flows are a powerful tool for optimizing functionals in general metric spaces,
including the space of probabilities endowed with the Wasserstein metric. A typical …

Improved dimension dependence of a proximal algorithm for sampling

J Fan, B Yuan, Y Chen - The Thirty Sixth Annual Conference …, 2023 - proceedings.mlr.press
We propose a sampling algorithm that achieves superior complexity bounds in all the
classical settings (strongly log-concave, log-concave, Logarithmic-Sobolev inequality (LSI) …

Neural optimal transport with general cost functionals

A Asadulaev, A Korotin, V Egiazarian, P Mokrov… - arXiv preprint arXiv …, 2022 - arxiv.org
Neural optimal transport techniques mostly use Euclidean cost functions, such as $\ell^1$
or $\ell^2$. These costs are suitable for translation tasks between related domains, but they …

Posterior sampling based on gradient flows of the MMD with negative distance kernel

P Hagemann, J Hertrich, F Altekrüger, R Beinert… - arXiv preprint arXiv …, 2023 - arxiv.org
We propose conditional flows of the maximum mean discrepancy (MMD) with the negative
distance kernel for posterior sampling and conditional generative modeling. This MMD …

Particle-based variational inference with generalized Wasserstein gradient flow

Z Cheng, S Zhang, L Yu… - Advances in Neural …, 2023 - proceedings.neurips.cc
Particle-based variational inference methods (ParVIs) such as Stein variational gradient
descent (SVGD) update the particles based on the kernelized Wasserstein gradient flow for …
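As context for the update rule this snippet refers to, here is a minimal sketch of the classical SVGD particle update (the rule of Liu & Wang with an RBF kernel and a standard Gaussian target, both chosen here for illustration), not the generalized Wasserstein gradient flow proposed in the cited paper:

    import numpy as np

    # Sketch of the classical SVGD update, for illustration only.
    # Target distribution and kernel bandwidth are assumptions,
    # not taken from the cited paper.

    def svgd_step(x, score, h=1.0, lr=0.1):
        """One SVGD step for particles x of shape (n, d)."""
        diff = x[:, None, :] - x[None, :, :]                 # diff[i, j] = x_i - x_j
        K = np.exp(-np.sum(diff**2, axis=-1) / (2 * h**2))   # RBF kernel matrix
        grad_K = diff * K[..., None] / h**2                  # grad_{x_j} k(x_j, x_i)
        # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) score(x_j) + grad_{x_j} k(x_j, x_i) ]
        phi = (K.T @ score(x) + grad_K.sum(axis=1)) / x.shape[0]
        return x + lr * phi

    # Example: particles flow toward a standard Gaussian, whose score is -x.
    x = np.random.randn(200, 2) * 3.0 + 5.0
    for _ in range(500):
        x = svgd_step(x, score=lambda z: -z)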

Neural Wasserstein gradient flows for maximum mean discrepancies with Riesz kernels

F Altekrüger, J Hertrich, G Steidl - arXiv preprint arXiv:2301.11624, 2023 - arxiv.org
Wasserstein gradient flows of maximum mean discrepancy (MMD) functionals with non-
smooth Riesz kernels show a rich structure as singular measures can become absolutely …

Self-consistent velocity matching of probability flows

L Li, S Hurault, JM Solomon - Advances in Neural …, 2023 - proceedings.neurips.cc
We present a discretization-free scalable framework for solving a large class of mass-
conserving partial differential equations (PDEs), including the time-dependent Fokker …