Advances in variational inference
Many modern unsupervised or semi-supervised machine learning algorithms rely on
Bayesian probabilistic models. These models are usually intractable and thus require …
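A common starting point for the approximate-inference methods surveyed here is the evidence lower bound (ELBO). In standard notation (mine, not this abstract's), with data x, latents z, and a variational family q_phi:

```latex
\log p_\theta(x) \;\ge\; \mathcal{L}(\theta,\phi)
  = \mathbb{E}_{q_\phi(z\mid x)}\!\bigl[\log p_\theta(x,z) - \log q_\phi(z\mid x)\bigr]
```

The gap is exactly KL(q_phi(z|x) || p_theta(z|x)), so maximizing the bound tightens the posterior approximation.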
Virtual adversarial training: a regularization method for supervised and semi-supervised learning
We propose a new regularization method based on virtual adversarial loss: a measure
of local smoothness of the conditional label distribution given the input. Virtual adversarial loss …
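As a rough illustration of the idea, a minimal PyTorch sketch of a virtual adversarial loss: find the perturbation direction to which the predicted label distribution is most sensitive via power iteration, then penalize the KL divergence under that perturbation. The function name and the hyperparameters xi, eps, and n_power are illustrative, not the paper's reference implementation.

```python
import torch
import torch.nn.functional as F

def virtual_adversarial_loss(model, x, xi=1e-6, eps=2.0, n_power=1):
    """Local distributional smoothness penalty for a classifier returning logits."""
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)   # "virtual labels": current predictions, held fixed
    d = torch.randn_like(x)              # random initial perturbation direction
    for _ in range(n_power):             # power iteration toward the most sensitive direction
        d = (xi * F.normalize(d.flatten(1), dim=1).reshape(x.shape)).requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x + d), dim=1), p, reduction="batchmean")
        d = torch.autograd.grad(kl, d)[0]
    r_adv = eps * F.normalize(d.flatten(1), dim=1).reshape(x.shape)
    return F.kl_div(F.log_softmax(model(x + r_adv), dim=1), p, reduction="batchmean")
```

Because the virtual labels come from the model itself, the penalty needs no ground-truth labels, which is what makes it usable for semi-supervised training.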
Challenges and opportunities in high dimensional variational inference
Current black-box variational inference (BBVI) methods require the user to make numerous
design choices, such as the selection of variational objective and approximating family, yet …
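To make the design choices concrete, here is a minimal sketch of one BBVI step under one common combination: the ELBO objective, a diagonal-Gaussian family, and reparameterization gradients. The log_joint callable (returning log p(x, z) per sample) and all names are assumptions for illustration.

```python
import math
import torch

def bbvi_step(log_joint, mu, log_sigma, optimizer, n_samples=16):
    """One stochastic-gradient ascent step on the ELBO for a diagonal-Gaussian q."""
    eps = torch.randn(n_samples, mu.shape[0])
    z = mu + log_sigma.exp() * eps                    # reparameterized samples from q
    entropy = log_sigma.sum() + 0.5 * mu.numel() * math.log(2 * math.pi * math.e)
    elbo = log_joint(z).mean() + entropy              # E_q[log p(x,z)] + H[q]
    optimizer.zero_grad()
    (-elbo).backward()                                # minimize the negative ELBO
    optimizer.step()
    return elbo.item()
```

Here mu and log_sigma would be created with requires_grad=True and registered with the optimizer, e.g. torch.optim.Adam([mu, log_sigma]); swapping the objective or the family changes this loop, which is the space of choices the paper examines.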
A brief introduction to machine learning for engineers
O Simeone - Foundations and Trends® in Signal Processing, 2018 - nowpublishers.com
This monograph aims to provide an introduction to key concepts, algorithms, and
theoretical results in machine learning. The treatment concentrates on probabilistic models …
Tighter variational bounds are not necessarily better
We provide theoretical and empirical evidence that using tighter evidence lower bounds
(ELBOs) can be detrimental to the process of learning an inference network by reducing the …
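The bounds at issue are the importance-weighted (IWAE) family; in standard notation:

```latex
\mathcal{L}_K
  = \mathbb{E}_{z_1,\dots,z_K \sim q_\phi(z\mid x)}
    \!\left[\log \frac{1}{K}\sum_{k=1}^{K}
      \frac{p_\theta(x, z_k)}{q_\phi(z_k\mid x)}\right],
\qquad
\mathcal{L}_1 \le \mathcal{L}_K \le \mathcal{L}_{K+1} \le \log p_\theta(x)
```

Although the bound tightens as K grows, the paper's argument is that the signal-to-noise ratio of the inference network's gradient estimates can degrade with K, which is the sense in which tighter is not necessarily better.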
Importance weighting and variational inference
J Domke, DR Sheldon - Advances in neural information …, 2018 - proceedings.neurips.cc
Recent work used importance sampling ideas for better variational bounds on likelihoods.
We clarify the applicability of these ideas to pure probabilistic inference by showing the …
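A minimal sketch of the importance-weighted likelihood estimate these ideas build on, assuming a proposal q exposing sample and log_prob in the style of torch.distributions (names are illustrative):

```python
import math
import torch

def iw_log_marginal_estimate(log_joint, q, K=64):
    """Importance-weighted estimate of log p(x); a lower bound in expectation."""
    z = q.sample((K,))                        # K proposals from the variational q
    log_w = log_joint(z) - q.log_prob(z)      # log importance weights, shape (K,)
    return torch.logsumexp(log_w, dim=0) - math.log(K)
```

Self-normalizing the same weights also turns the K draws into approximate posterior expectations, which is the route from better bounds to pure probabilistic inference.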
A framework for improving the reliability of black-box variational inference
Black-box variational inference (BBVI) now sees widespread use in machine learning and
statistics as a fast yet flexible alternative to Markov chain Monte Carlo methods for …
Markov chain score ascent: A unifying framework of variational inference with markovian gradients
Minimizing the inclusive Kullback-Leibler (KL) divergence with stochastic gradient
descent (SGD) is challenging since its gradient is defined as an integral over the posterior …
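The difficulty the abstract refers to is visible from the gradient itself; dropping terms independent of the variational parameters lambda:

```latex
\nabla_\lambda \,\mathrm{KL}\bigl(p(z\mid x)\,\big\|\,q_\lambda(z)\bigr)
  = -\,\mathbb{E}_{p(z\mid x)}\bigl[\nabla_\lambda \log q_\lambda(z)\bigr]
```

This is an expectation under the intractable posterior itself, so the methods unified here estimate it with samples from a Markov chain targeting the posterior, which makes the resulting stochastic gradients Markovian rather than i.i.d.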
Wasserstein variational inference
This paper introduces Wasserstein variational inference, a new form of approximate
Bayesian inference based on optimal transport theory. Wasserstein variational inference …
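For reference, the optimal-transport quantity underlying this line of work is the Kantorovich cost between the approximation q and the target p (the paper works with a generalization, a c-Wasserstein family; this is the standard starting point):

```latex
W_c(q, p) = \inf_{\gamma \in \Gamma(q, p)}
  \mathbb{E}_{(z, z') \sim \gamma}\bigl[c(z, z')\bigr]
```

Here Gamma(q, p) is the set of couplings with marginals q and p, and c is a cost function; taking c to be a metric raised to the p-th power recovers the usual Wasserstein-p distance up to a 1/p power.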
Debiasing evidence approximations: On importance-weighted autoencoders and jackknife variational inference
S Nowozin - International conference on learning representations, 2018 - openreview.net
The importance-weighted autoencoder (IWAE) approach of Burda et al. defines a sequence
of successively tighter bounds on the marginal likelihood of latent variable models. Recently …
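The bias of the K-sample bound is of order 1/K; the first-order jackknife correction, the classical device this paper builds on, takes the form (notation is mine, not the paper's):

```latex
\hat{\mathcal{L}}^{\,\mathrm{JVI}}_{K}
  = K\,\hat{\mathcal{L}}_{K} \;-\; (K-1)\,\hat{\mathcal{L}}_{K-1}
```

where the (K-1)-sample term is averaged over leave-one-out subsets of the K importance samples. This cancels the leading O(1/K) bias term, at the price that the debiased estimate is no longer a guaranteed lower bound.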