A review of change of variable formulas for generative modeling

U Köthe - arXiv preprint arXiv:2308.02652, 2023 - arxiv.org
Change-of-variables (CoV) formulas make it possible to reduce complicated probability densities to
simpler ones via a learned transformation with a tractable Jacobian determinant. They are thus …
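For context, the identity these CoV formulas build on is the standard change-of-variables rule for densities; the following statement is textbook material, not taken from the paper itself:

```latex
% For an invertible, differentiable map f with z = f(x) and z ~ p_Z,
% the density of x is given by the change-of-variables formula:
p_X(x) = p_Z\bigl(f(x)\bigr)\,\bigl|\det J_f(x)\bigr|,
\qquad J_f(x) = \frac{\partial f(x)}{\partial x}.
```

Normalizing flows parameterize $f$ so that $\det J_f$ is tractable, which is the design constraint several of the papers below aim to relax.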

Free-form flows: Make any architecture a normalizing flow

F Draxler, P Sorrenson… - International …, 2024 - proceedings.mlr.press
Normalizing Flows are generative models that directly maximize the likelihood. Previously,
the design of normalizing flows was largely constrained by the need for analytical …

A Pseudoreversible Normalizing Flow for Stochastic Dynamical Systems with Various Initial Distributions

M Yang, P Wang, D del-Castillo-Negrete, Y Cao… - SIAM Journal on …, 2024 - SIAM
We present a pseudoreversible normalizing flow method for efficiently generating samples
of the state of a stochastic differential equation (SDE) with various initial distributions. The …

Lifting architectural constraints of injective flows

P Sorrenson, F Draxler, A Rousselot… - The Twelfth …, 2024 - openreview.net
Normalizing Flows explicitly maximize a full-dimensional likelihood on the training data.
However, real data is typically only supported on a lower-dimensional manifold leading the …

Training energy-based normalizing flow with score-matching objectives

CH Chao, WF Sun, YC Hsu, Z Kira… - Advances in Neural …, 2024 - proceedings.neurips.cc
In this paper, we establish a connection between the parameterization of flow-based and
energy-based generative models, and present a new flow-based modeling approach called …

Semi-autoregressive energy flows: exploring likelihood-free training of normalizing flows

P Si, Z Chen, SS Sahoo, Y Schiff… - … on Machine Learning, 2023 - proceedings.mlr.press
Training normalizing flow generative models can be challenging due to the need to
calculate computationally expensive determinants of Jacobians. This paper studies the …

Maximum Likelihood Training of Autoencoders

P Sorrenson, F Draxler, A Rousselot… - arXiv preprint arXiv …, 2023 - arxiv.org
Maximum likelihood training has favorable statistical properties and is popular for generative
modeling, especially with normalizing flows. On the other hand, generative autoencoders …

Deep generative model based rate-distortion for image downscaling assessment

Y Liang, B Garg, P Rosin, Y Qin - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
In this paper, we propose Image Downscaling Assessment by Rate-Distortion (IDA-RD), a
novel measure to quantitatively evaluate image downscaling algorithms. In contrast to image …

Exact, Tractable Gauss-Newton Optimization in Deep Reversible Architectures Reveal Poor Generalization

D Buffelli, J McGowan, W Xu, A Cioba, D Shiu… - arXiv preprint arXiv …, 2024 - arxiv.org
Second-order optimization has been shown to accelerate the training of deep neural
networks in many applications, often yielding faster progress per iteration on the training …

Maximum Entropy Reinforcement Learning via Energy-Based Normalizing Flow

CH Chao, C Feng, WF Sun, CK Lee, S See… - arXiv preprint arXiv …, 2024 - arxiv.org
Existing Maximum-Entropy (MaxEnt) Reinforcement Learning (RL) methods for continuous
action spaces are typically formulated based on actor-critic frameworks and optimized …