Dynamical variational autoencoders: A comprehensive review

L Girin, S Leglaive, X Bie, J Diard, T Hueber… - arXiv preprint arXiv …, 2020 - arxiv.org
Variational autoencoders (VAEs) are powerful deep generative models widely used to
represent high-dimensional complex data through a low-dimensional latent space learned …

Memory as a computational resource

I Dasgupta, SJ Gershman - Trends in cognitive sciences, 2021 - cell.com
Computer scientists have long recognized that naive implementations of algorithms often
result in a paralyzing degree of redundant computation. More sophisticated implementations …

Disentangling disentanglement in variational autoencoders

E Mathieu, T Rainforth, N Siddharth… - … on machine learning, 2019 - proceedings.mlr.press
We develop a generalisation of disentanglement in variational autoencoders (VAEs)—
decomposition of the latent representation—characterising it as the fulfilment of two factors …

Deep variational reinforcement learning for POMDPs

M Igl, L Zintgraf, TA Le, F Wood… - … on machine learning, 2018 - proceedings.mlr.press
Many real-world sequential decision making problems are partially observable by nature,
and the environment model is typically unknown. Consequently, there is great need for …

Sequential attend, infer, repeat: Generative modelling of moving objects

A Kosiorek, H Kim, YW Teh… - Advances in Neural …, 2018 - proceedings.neurips.cc
We present Sequential Attend, Infer, Repeat (SQAIR), an interpretable deep
generative model for image sequences. It can reliably discover and track objects through the …

Score-based data assimilation

F Rozet, G Louppe - Advances in Neural Information …, 2023 - proceedings.neurips.cc
Data assimilation, in its most comprehensive form, addresses the Bayesian inverse problem
of identifying plausible state trajectories that explain noisy or incomplete observations of …

An introduction to probabilistic programming

JW van de Meent, B Paige, H Yang, F Wood - arXiv preprint arXiv …, 2018 - arxiv.org
This book is a graduate-level introduction to probabilistic programming. It not only provides a
thorough background for anyone wishing to use a probabilistic programming system, but …

Tighter variational bounds are not necessarily better

T Rainforth, A Kosiorek, TA Le… - International …, 2018 - proceedings.mlr.press
We provide theoretical and empirical evidence that using tighter evidence lower bounds
(ELBOs) can be detrimental to the process of learning an inference network by reducing the …

Differentiable particle filtering via entropy-regularized optimal transport

A Corenflos, J Thornton… - International …, 2021 - proceedings.mlr.press
Particle Filtering (PF) methods are an established class of procedures for performing
inference in non-linear state-space models. Resampling is a key ingredient of PF necessary …

Sequential latent knowledge selection for knowledge-grounded dialogue

B Kim, J Ahn, G Kim - arXiv preprint arXiv:2002.07510, 2020 - arxiv.org
Knowledge-grounded dialogue is a task of generating an informative response based on
both discourse context and external knowledge. As we focus on better modeling the …