Dynamical variational autoencoders: A comprehensive review

L Girin, S Leglaive, X Bie, J Diard, T Hueber… - arXiv preprint arXiv…, 2020 - arxiv.org
Variational autoencoders (VAEs) are powerful deep generative models widely used to
represent high-dimensional complex data through a low-dimensional latent space learned …
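
As background for this line of work, here is a minimal sketch of the VAE machinery the review builds on: reparameterized sampling plus a two-term loss. The module names (`TinyVAE`, `enc`, `dec`) and layer sizes are illustrative assumptions, not taken from the paper.

```python
# Minimal VAE sketch (illustrative; names and sizes are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)  # outputs mean and log-variance
        self.dec = nn.Linear(z_dim, x_dim)      # maps latent code back to data space

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        x_hat = self.dec(z)
        # Negative ELBO: reconstruction term + KL divergence to the standard normal prior
        recon = F.mse_loss(x_hat, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl
```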

The survey: Text generation models in deep learning

T Iqbal, S Qureshi - Journal of King Saud University-Computer and …, 2022 - Elsevier
Deep learning methods use many processing layers to learn stratified representations of
data and have achieved state-of-the-art results in several domains. Recently …

Versatile diffusion: Text, images and variations all in one diffusion model

X Xu, Z Wang, G Zhang, K Wang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Recent advances in diffusion models have set an impressive milestone in many generation
tasks, and trending works such as DALL-E 2, Imagen, and Stable Diffusion have attracted …
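
For context, a hedged sketch of the forward-noising step shared by DDPM-style diffusion models, q(x_t | x_0) = N(sqrt(ᾱ_t) x_0, (1 − ᾱ_t) I); the schedule length and beta range are illustrative assumptions, not Versatile Diffusion's settings.

```python
# Closed-form forward (noising) process used by DDPM-style diffusion models.
# Schedule length and beta range are illustrative assumptions.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative product of (1 - beta_t)

def noisy_sample(x0, t):
    """Sample x_t directly from x_0 in one step."""
    eps = torch.randn_like(x0)
    return alpha_bar[t].sqrt() * x0 + (1.0 - alpha_bar[t]).sqrt() * eps
```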

GLM: General language model pretraining with autoregressive blank infilling

Z Du, Y Qian, X Liu, M Ding, J Qiu, Z Yang… - arXiv preprint arXiv…, 2021 - arxiv.org
There have been various types of pretraining architectures including autoencoding models
(e.g., BERT), autoregressive models (e.g., GPT), and encoder-decoder models (e.g., T5) …
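
To make the blank-infilling idea concrete, a hedged sketch of how such a training example can be constructed; the special tokens ([MASK], [SOS]) and single-span setup are illustrative assumptions, not GLM's exact recipe.

```python
# Hedged sketch of autoregressive blank infilling: mask a contiguous span and
# train the model to regenerate it left-to-right. Special tokens and the
# single-span setup are illustrative assumptions.
def make_infilling_example(tokens, start, length):
    span = tokens[start:start + length]
    corrupted = tokens[:start] + ["[MASK]"] + tokens[start + length:]
    # The model conditions on the corrupted context and predicts the
    # masked span one token at a time (autoregressively).
    decoder_input = ["[SOS]"] + span[:-1]
    target = span
    return corrupted, decoder_input, target

ctx, dec_in, tgt = make_infilling_example(
    ["the", "cat", "sat", "on", "the", "mat"], start=2, length=2)
# ctx    = ['the', 'cat', '[MASK]', 'the', 'mat']
# dec_in = ['[SOS]', 'sat']
# tgt    = ['sat', 'on']
```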

An introduction to variational autoencoders

DP Kingma, M Welling - Foundations and Trends® in …, 2019 - nowpublishers.com
This monograph offers an introduction to variational autoencoders and the variational
inference framework on which they are built.
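
The central object in this framework is the evidence lower bound (ELBO) on the data log-likelihood, which the monograph develops in detail:

```latex
\log p_\theta(x) \;\ge\;
\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
\;-\; D_{\mathrm{KL}}\!\left(q_\phi(z \mid x)\,\|\,p(z)\right)
```

Maximizing the right-hand side jointly trains the encoder q_φ and the decoder p_θ.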

MixText: Linguistically-informed interpolation of hidden space for semi-supervised text classification

J Chen, Z Yang, D Yang - arXiv preprint arXiv:2004.12239, 2020 - arxiv.org
This paper presents MixText, a semi-supervised learning method for text classification,
which uses our newly designed data augmentation method called TMix. TMix creates a …
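
TMix's core move is mixup applied to hidden representations rather than raw tokens; a minimal sketch, where the mixing layer m and the Beta(0.75, 0.75) prior are assumptions made for illustration.

```python
# Minimal sketch of hidden-space mixup (the idea behind TMix): encode two
# inputs through the lower layers, interpolate their hidden states, then
# finish the forward pass on the mix. Mixing layer and Beta prior are
# illustrative assumptions.
import torch

def tmix_forward(layers, x1, x2, m, alpha=0.75):
    lam = torch.distributions.Beta(alpha, alpha).sample()
    h1, h2 = x1, x2
    for layer in layers[:m]:          # encode both examples up to layer m
        h1, h2 = layer(h1), layer(h2)
    h = lam * h1 + (1 - lam) * h2     # interpolate hidden representations
    for layer in layers[m:]:          # continue with the mixed representation
        h = layer(h)
    return h, lam                     # lam also weights the two labels' losses
```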

An empirical survey of data augmentation for limited data learning in NLP

J Chen, D Tam, C Raffel, M Bansal… - Transactions of the …, 2023 - direct.mit.edu
NLP has made great progress in the past decade through the use of neural models and
large labeled datasets. The dependence on abundant data prevents NLP models from being …

Neural discrete representation learning

A van den Oord, O Vinyals - Advances in neural …, 2017 - proceedings.neurips.cc
Learning useful representations without supervision remains a key challenge in machine
learning. In this paper, we propose a simple yet powerful generative model that learns such …
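
The proposed model is the VQ-VAE, whose key operation snaps each encoder output to its nearest codebook vector; a hedged sketch with the straight-through gradient estimator, where the codebook size and embedding dimension are assumptions.

```python
# Sketch of VQ-VAE's vector-quantization step: snap each encoder output to
# its nearest codebook vector and copy gradients straight through. Codebook
# size and embedding dimension are illustrative assumptions.
import torch

K, D = 512, 64
codebook = torch.randn(K, D, requires_grad=True)

def quantize(z_e):                       # z_e: (batch, D) encoder outputs
    dists = torch.cdist(z_e, codebook)   # pairwise distances to all codes
    idx = dists.argmin(dim=-1)           # nearest-neighbour code indices
    z_q = codebook[idx]                  # quantized latents
    # Straight-through estimator: forward uses z_q, backward flows through z_e.
    return z_e + (z_q - z_e).detach(), idx
```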

Protein design and variant prediction using autoregressive generative models

JE Shin, AJ Riesselman, AW Kollasch… - Nature …, 2021 - nature.com
The ability to design functional sequences and predict effects of variation is central to protein
engineering and biotherapeutics. State-of-the-art computational methods rely on models that …
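
A common way to use such autoregressive models for variant prediction is to score sequences by their total log-likelihood; a hedged sketch, where `log_probs` is a hypothetical model interface returning next-residue log-probabilities given a prefix.

```python
# Hedged sketch of variant scoring with an autoregressive sequence model:
# score(x) = sum_i log p(x_i | x_<i). `log_probs(prefix)` is a hypothetical
# interface returning a mapping from next residue to its log-probability;
# a mutation's effect is then the log-likelihood ratio against wild type.
def sequence_score(log_probs, seq):
    return sum(log_probs(seq[:i])[seq[i]] for i in range(len(seq)))

def variant_effect(log_probs, wild_type, variant):
    return sequence_score(log_probs, variant) - sequence_score(log_probs, wild_type)
```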

Cyclical annealing schedule: A simple approach to mitigating KL vanishing

H Fu, C Li, X Liu, J Gao, A Celikyilmaz… - arXiv preprint arXiv…, 2019 - arxiv.org
Variational autoencoders (VAEs) with an auto-regressive decoder have been applied to
many natural language processing (NLP) tasks. The VAE objective consists of two terms: (i) …
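
The paper's remedy is to anneal the weight β on the KL term cyclically rather than monotonically; a minimal sketch of one such schedule, where the number of cycles and the ramp fraction are illustrative assumptions.

```python
# Sketch of a cyclical KL-weight schedule: within each cycle, beta ramps
# linearly from 0 to 1 over the first half, then stays at 1. Cycle count and
# ramp fraction are illustrative assumptions.
def cyclical_beta(step, total_steps, n_cycles=4, ramp=0.5):
    period = total_steps / n_cycles
    phase = (step % period) / period     # position within the current cycle
    return min(phase / ramp, 1.0)        # linear warm-up, then constant at 1

# Per-step VAE loss then becomes: recon_loss + cyclical_beta(step, T) * kl_loss
```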