Deep generative modelling: A comparative review of VAEs, GANs, normalizing flows, energy-based and autoregressive models

S Bond-Taylor, A Leach, Y Long… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Deep generative models are a class of techniques that train deep neural networks to model
the distribution of training samples. Research has fragmented into various interconnected …

Recent advances for quantum neural networks in generative learning

J Tian, X Sun, Y Du, S Zhao, Q Liu… - … on Pattern Analysis …, 2023 - ieeexplore.ieee.org
Quantum computers are next-generation devices that hold promise to perform calculations
beyond the reach of classical computers. A leading method towards achieving this goal is …

Geometric latent diffusion models for 3D molecule generation

M Xu, AS Powers, RO Dror, S Ermon… - International …, 2023 - proceedings.mlr.press
Generative models, especially diffusion models (DMs), have achieved promising results for
generating feature-rich geometries and advancing foundational science problems such as …

High-resolution image synthesis with latent diffusion models

R Rombach, A Blattmann, D Lorenz… - Proceedings of the …, 2022 - openaccess.thecvf.com
By decomposing the image formation process into a sequential application of denoising
autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image …

Score-based generative modeling in latent space

A Vahdat, K Kreis, J Kautz - Advances in Neural Information …, 2021 - proceedings.neurips.cc
Score-based generative models (SGMs) have recently demonstrated impressive results in
terms of both sample quality and distribution coverage. However, they are usually applied …

Taming transformers for high-resolution image synthesis

P Esser, R Rombach, B Ommer - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Designed to learn long-range interactions on sequential data, transformers continue to show
state-of-the-art results on a wide variety of tasks. In contrast to CNNs, they contain no …

Very deep VAEs generalize autoregressive models and can outperform them on images

R Child - arXiv preprint arXiv:2011.10650, 2020 - arxiv.org
We present a hierarchical VAE that, for the first time, generates samples quickly while
outperforming the PixelCNN in log-likelihood on all natural image benchmarks. We begin by …

Disentangled and controllable face image generation via 3D imitative-contrastive learning

Y Deng, J Yang, D Chen, F Wen… - Proceedings of the …, 2020 - openaccess.thecvf.com
We propose an approach for face image generation of virtual people with disentangled,
precisely-controllable latent representations for identity of non-existing people, expression …

ImageBART: Bidirectional context with multinomial diffusion for autoregressive image synthesis

P Esser, R Rombach, A Blattmann… - Advances in Neural …, 2021 - proceedings.neurips.cc
Autoregressive models and their sequential factorization of the data likelihood have recently
demonstrated great potential for image representation and synthesis. Nevertheless, they …

D2C: Diffusion-decoding models for few-shot conditional generation

A Sinha, J Song, C Meng… - Advances in Neural …, 2021 - proceedings.neurips.cc
Conditional generative models of high-dimensional images have many applications, but
supervision signals from conditions to images can be expensive to acquire. This paper …