Dynamical variational autoencoders: A comprehensive review

L Girin, S Leglaive, X Bie, J Diard, T Hueber… - arXiv preprint arXiv …, 2020 - arxiv.org
Variational autoencoders (VAEs) are powerful deep generative models widely used to
represent high-dimensional complex data through a low-dimensional latent space learned …

The free energy principle for perception and action: A deep learning perspective

P Mazzaglia, T Verbelen, O Catal, B Dhoedt - Entropy, 2022 - mdpi.com
The free energy principle, and its corollary active inference, constitute a bio-inspired theory
that assumes biological agents act to remain in a restricted set of preferred states of the …

Score-based generative modeling in latent space

A Vahdat, K Kreis, J Kautz - Advances in neural information …, 2021 - proceedings.neurips.cc
Score-based generative models (SGMs) have recently demonstrated impressive results in
terms of both sample quality and distribution coverage. However, they are usually applied …

NVAE: A deep hierarchical variational autoencoder

A Vahdat, J Kautz - Advances in neural information …, 2020 - proceedings.neurips.cc
Normalizing flows, autoregressive models, variational autoencoders (VAEs), and deep
energy-based models are among competing likelihood-based frameworks for deep …

Maximum likelihood training of score-based diffusion models

Y Song, C Durkan, I Murray… - Advances in neural …, 2021 - proceedings.neurips.cc
Score-based diffusion models synthesize samples by reversing a stochastic process that
diffuses data to noise, and are trained by minimizing a weighted combination of score …

Step-unrolled denoising autoencoders for text generation

N Savinov, J Chung, M Binkowski, E Elsen… - arXiv preprint arXiv …, 2021 - arxiv.org
In this paper, we propose a new generative model of text, the Step-unrolled Denoising
Autoencoder (SUNDAE), which does not rely on autoregressive models. Similarly to denoising …

Don't blame the ELBO! A linear VAE perspective on posterior collapse

J Lucas, G Tucker, RB Grosse… - Advances in Neural …, 2019 - proceedings.neurips.cc
Posterior collapse in Variational Autoencoders (VAEs) with uninformative priors
arises when the variational posterior distribution closely matches the prior for a subset of …

Maximum likelihood training of implicit nonlinear diffusion model

D Kim, B Na, SJ Kwon, D Lee… - Advances in neural …, 2022 - proceedings.neurips.cc
Although diverse variants of diffusion models exist, extending the linear diffusion into a
nonlinear diffusion process has been investigated by very few works. The nonlinearity effect has …

Learning energy-based prior model with diffusion-amortized MCMC

P Yu, Y Zhu, S Xie, XS Ma, R Gao… - Advances in Neural …, 2023 - proceedings.neurips.cc
Latent space EBMs, also known as energy-based priors, have drawn growing interest in
the field of generative modeling due to their flexibility in the formulation and strong modeling …

Bilateral variational autoencoder for collaborative filtering

QT Truong, A Salah, HW Lauw - … conference on web search and data …, 2021 - dl.acm.org
Preference data is a form of dyadic data, with measurements associated with pairs of
elements arising from two discrete sets of objects. These are users and items, as well as …