Deep generative modelling: A comparative review of VAEs, GANs, normalizing flows, energy-based and autoregressive models

S Bond-Taylor, A Leach, Y Long… - IEEE transactions on …, 2021 - ieeexplore.ieee.org
Deep generative models are a class of techniques that train deep neural networks to model
the distribution of training samples. Research has fragmented into various interconnected …

NeRF: Neural radiance field in 3D vision, a comprehensive review

K Gao, Y Gao, H He, D Lu, L Xu, J Li - arXiv preprint arXiv:2210.00379, 2022 - arxiv.org
Neural Radiance Field (NeRF), a novel view synthesis technique with implicit scene
representation, has taken the field of Computer Vision by storm. As a novel view synthesis …
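
For orientation, the core of a NeRF-style model is an MLP that maps a 3D point and a viewing direction to a colour and a volume density, which are then composited along camera rays. The sketch below is a minimal illustration only; the `TinyNeRF` module, the layer sizes, and the omission of positional encoding and ray marching are our simplifying assumptions, not the survey's description of any specific method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNeRF(nn.Module):
    """Minimal NeRF-style MLP: (3D point, view direction) -> (RGB, density)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)             # volume density
        self.rgb_head = nn.Sequential(                     # view-dependent colour
            nn.Linear(hidden + 3, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, view_dir):
        h = self.trunk(xyz)
        sigma = torch.relu(self.sigma_head(h))
        rgb = self.rgb_head(torch.cat([h, view_dir], dim=-1))
        return rgb, sigma

# Query 1024 sample points along some camera rays (random stand-ins here).
points = torch.rand(1024, 3)
dirs = F.normalize(torch.rand(1024, 3), dim=-1)
rgb, sigma = TinyNeRF()(points, dirs)
```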

VoxelMorph: a learning framework for deformable medical image registration

G Balakrishnan, A Zhao, MR Sabuncu… - IEEE transactions on …, 2019 - ieeexplore.ieee.org
We present VoxelMorph, a fast learning-based framework for deformable, pairwise medical
image registration. Traditional registration methods optimize an objective function for each …
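
To illustrate the learning-based formulation, the sketch below has a small CNN (the hypothetical `TinyRegNet`) predict a dense displacement field for a moving/fixed image pair and then warps the moving image with a spatial transformer. It is a 2D toy version under our own assumptions, not VoxelMorph's actual U-Net architecture or full loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRegNet(nn.Module):
    """Toy registration network: (moving, fixed) pair -> dense (dx, dy) field."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),          # 2-channel displacement field
        )

    def forward(self, moving, fixed):
        return self.net(torch.cat([moving, fixed], dim=1))

def warp(image, flow):
    """Warp `image` with the displacement field `flow` via grid_sample."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).expand(b, h, w, 2)
    # Convert pixel displacements to normalised [-1, 1] grid offsets.
    offset = torch.stack([flow[:, 0] * 2 / w, flow[:, 1] * 2 / h], dim=-1)
    return F.grid_sample(image, base + offset, align_corners=True)

moving, fixed = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
flow = TinyRegNet()(moving, fixed)
warped = warp(moving, flow)
loss = F.mse_loss(warped, fixed)   # similarity term; smoothness regulariser omitted
```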

Diffusion-based generation, optimization, and planning in 3D scenes

S Huang, Z Wang, P Li, B Jia, T Liu… - Proceedings of the …, 2023 - openaccess.thecvf.com
We introduce SceneDiffuser, a conditional generative model for 3D scene understanding.
SceneDiffuser provides a unified model for solving scene-conditioned generation …
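
As a rough illustration of the diffusion-based formulation, the snippet below runs one conditional denoising-diffusion training step in which a toy denoiser is conditioned on a scene feature vector. The shapes, the tiny MLP, and the noise-prediction loss are generic DDPM-style assumptions on our part, not SceneDiffuser's actual architecture or conditioning scheme.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# One conditional-diffusion training step: the denoiser predicts the noise added
# to a sample (here a flattened trajectory/pose vector) given the timestep and a
# scene-conditioning feature vector.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

denoiser = nn.Sequential(nn.Linear(64 + 1 + 128, 256), nn.ReLU(), nn.Linear(256, 64))

x0 = torch.randn(8, 64)            # clean samples (e.g. encoded poses/trajectories)
scene = torch.randn(8, 128)        # per-sample scene conditioning features
t = torch.randint(0, T, (8,))
noise = torch.randn_like(x0)

a_bar = alphas_bar[t].unsqueeze(-1)
x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise        # forward (noising) process
pred = denoiser(torch.cat([x_t, t.float().unsqueeze(-1) / T, scene], dim=-1))
loss = F.mse_loss(pred, noise)                              # noise-prediction objective
loss.backward()
```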

Dynamical variational autoencoders: A comprehensive review

L Girin, S Leglaive, X Bie, J Diard, T Hueber… - arXiv preprint arXiv …, 2020 - arxiv.org
Variational autoencoders (VAEs) are powerful deep generative models widely used to
represent high-dimensional complex data through a low-dimensional latent space learned …
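
For context, the sketch below is a minimal static VAE (encoder, reparameterised latent, decoder, negative ELBO); the dynamical VAEs surveyed in the paper extend exactly this latent-variable structure with temporal dependencies. The single-layer networks and sizes are arbitrary assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Minimal static VAE: encoder -> (mu, logvar), reparameterised z, decoder."""
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)     # outputs mean and log-variance
        self.dec = nn.Linear(z_dim, x_dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterisation
        return self.dec(z), mu, logvar

x = torch.rand(32, 784)
recon, mu, logvar = TinyVAE()(x)
rec_loss = F.binary_cross_entropy_with_logits(recon, x, reduction="sum") / x.size(0)
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
elbo_loss = rec_loss + kl          # negative ELBO: reconstruction + KL regulariser
```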

Cyclical annealing schedule: A simple approach to mitigating KL vanishing

H Fu, C Li, X Liu, J Gao, A Celikyilmaz… - arXiv preprint arXiv …, 2019 - arxiv.org
Variational autoencoders (VAEs) with an auto-regressive decoder have been applied to
many natural language processing (NLP) tasks. The VAE objective consists of two terms, (i) …
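
The two terms are the reconstruction term and the KL regulariser, and cyclical annealing repeatedly ramps the KL weight β from 0 back up to 1 over the course of training. The sketch below shows one plausible such schedule; the cycle count and the linear ramp are tunable assumptions rather than the paper's fixed prescription.

```python
def cyclical_beta(step, total_steps, n_cycles=4, ramp_fraction=0.5):
    """Cyclical KL-weight schedule: within each cycle, beta ramps linearly from
    0 to 1 over `ramp_fraction` of the cycle, then stays at 1 until the cycle
    restarts. The annealed objective is loss = reconstruction + beta * KL."""
    cycle_len = total_steps / n_cycles
    pos = (step % cycle_len) / cycle_len        # position within the current cycle
    return min(pos / ramp_fraction, 1.0)

# beta at a few training steps (4 cycles over 10k steps): rises, saturates, resets.
print([round(cyclical_beta(s, 10000), 2) for s in (0, 625, 1250, 2400, 2500)])
# -> [0.0, 0.5, 1.0, 1.0, 0.0]
```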

Protein design and variant prediction using autoregressive generative models

JE Shin, AJ Riesselman, AW Kollasch… - Nature …, 2021 - nature.com
The ability to design functional sequences and predict effects of variation is central to protein
engineering and biotherapeutics. State-of-the-art computational methods rely on models that …
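
As a minimal illustration of autoregressive sequence modelling over the amino-acid alphabet, the sketch below factorises p(x) as a product of per-residue conditionals p(x_i | x_<i) and samples a sequence left to right. The tiny LSTM, vocabulary handling, and sizes are illustrative assumptions, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

AA = "ACDEFGHIKLMNPQRSTVWY"   # 20 amino acids; index 20 is used as a start token

class TinyProteinLM(nn.Module):
    """Autoregressive sequence model: predicts logits over the next residue."""
    def __init__(self, vocab=21, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, tokens):
        h, _ = self.rnn(self.emb(tokens))
        return self.out(h)

@torch.no_grad()
def sample(model, length=50):
    tokens = torch.full((1, 1), 20)            # start token
    for _ in range(length):
        logits = model(tokens)[:, -1]
        nxt = torch.multinomial(torch.softmax(logits, dim=-1), 1)
        tokens = torch.cat([tokens, nxt], dim=1)
    return "".join(AA[i] for i in tokens[0, 1:].tolist() if i < 20)

print(sample(TinyProteinLM()))                 # untrained, so output is random
```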

COIN: Compression with implicit neural representations

E Dupont, A Goliński, M Alizadeh, YW Teh… - arXiv preprint arXiv …, 2021 - arxiv.org
We propose a new simple approach for image compression: instead of storing the RGB
values for each pixel of an image, we store the weights of a neural network overfitted to the …
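
The sketch below illustrates the stated idea: overfit a small coordinate MLP f(x, y) → (R, G, B) to a single image and store its weights in place of the pixels. The sine activations, layer sizes, and optimisation budget are assumptions loosely in the spirit of this line of work, not COIN's exact recipe (which also quantises the weights).

```python
import torch
import torch.nn as nn

class SirenLike(nn.Module):
    """Small coordinate MLP mapping (x, y) in [-1, 1]^2 to an RGB colour."""
    def __init__(self, hidden=32):
        super().__init__()
        self.l1 = nn.Linear(2, hidden)
        self.l2 = nn.Linear(hidden, hidden)
        self.l3 = nn.Linear(hidden, 3)

    def forward(self, xy):
        h = torch.sin(30 * self.l1(xy))
        h = torch.sin(30 * self.l2(h))
        return torch.sigmoid(self.l3(h))

H = W = 64
image = torch.rand(H * W, 3)                   # stand-in for a real image's pixels
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)

net = SirenLike()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):                           # overfit the network to this one image
    opt.zero_grad()
    loss = ((net(coords) - image) ** 2).mean()
    loss.backward()
    opt.step()

# The "compressed" representation is the weight vector, not the pixel array.
print(sum(p.numel() for p in net.parameters()), "weights vs.", H * W * 3, "pixel values")
```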

An introduction to neural data compression

Y Yang, S Mandt, L Theis - Foundations and Trends® in …, 2023 - nowpublishers.com
Neural compression is the application of neural networks and other machine learning
methods to data compression. Recent advances in statistical machine learning have opened …

Lagging inference networks and posterior collapse in variational autoencoders

J He, D Spokoyny, G Neubig… - arXiv preprint arXiv …, 2019 - arxiv.org
The variational autoencoder (VAE) is a popular combination of a deep latent variable model
and an accompanying variational learning technique. By using a neural inference network to …
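
One remedy examined in this line of work is to update the inference (encoder) network "aggressively", i.e. several times per generator update, so the approximate posterior keeps pace with the decoder instead of collapsing onto the prior. The sketch below shows that schedule on a toy VAE; the model, step counts, and the use of the KL term as a collapse indicator are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

enc = nn.Linear(784, 32)            # outputs mean and log-variance (16 + 16)
dec = nn.Linear(16, 784)
enc_opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
dec_opt = torch.optim.Adam(dec.parameters(), lr=1e-3)

def neg_elbo(x):
    mu, logvar = enc(x).chunk(2, dim=-1)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    rec = F.binary_cross_entropy_with_logits(dec(z), x, reduction="sum") / x.size(0)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return rec + kl, kl

x = torch.rand(32, 784)
for _ in range(10):                 # aggressive phase: update only the encoder
    enc_opt.zero_grad()
    loss, _ = neg_elbo(x)
    loss.backward()
    enc_opt.step()

dec_opt.zero_grad()                 # then a single decoder (generator) update
loss, kl = neg_elbo(x)
loss.backward()
dec_opt.step()
print("KL term:", kl.item())        # a KL near zero signals posterior collapse
```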