Provable convergence guarantees for black-box variational inference
Black-box variational inference is widely used in situations where there is no proof that its
stochastic optimization succeeds. We suggest this is due to a theoretical gap in existing …
A framework for improving the reliability of black-box variational inference
Black-box variational inference (BBVI) now sees widespread use in machine learning and
statistics as a fast yet flexible alternative to Markov chain Monte Carlo methods for …
Practical and matching gradient variance bounds for black-box variational Bayesian inference
Understanding the gradient variance of black-box variational inference (BBVI) is a crucial
step for establishing its convergence and developing algorithmic improvements. However …
Imperative learning: A self-supervised neural-symbolic learning framework for robot autonomy
Data-driven methods such as reinforcement and imitation learning have achieved
remarkable success in robot autonomy. However, their data-centric nature still hinders them …
Linear Convergence of Black-Box Variational Inference: Should We Stick the Landing?
We prove that black-box variational inference (BBVI) with control variates, particularly the
sticking-the-landing (STL) estimator, converges at a geometric (traditionally called “linear”) …
Model-based reinforcement learning with scalable composite policy gradient estimators
In model-based reinforcement learning (MBRL), policy gradients can be estimated either by
derivative-free RL methods, such as likelihood ratio gradients (LR), or by backpropagating …
Robust, automated, and accurate black-box variational inference
Black-box variational inference (BBVI) now sees widespread use in machine learning and
statistics as a fast yet flexible alternative to Markov chain Monte Carlo methods for …
Sample average approximation for Black-Box VI
We present a novel approach for black-box VI that bypasses the difficulties of stochastic
gradient ascent, including the task of selecting step-sizes. Our approach involves using a …
Double control variates for gradient estimation in discrete latent variable models
Stochastic gradient-based optimisation for discrete latent variable models is challenging due
to the high variance of gradients. We introduce a variance reduction technique for score …
Divide and couple: Using Monte Carlo variational objectives for posterior approximation
J Domke, DR Sheldon - Advances in neural information …, 2019 - proceedings.neurips.cc
Recent work in variational inference (VI) has used ideas from Monte Carlo estimation to
obtain tighter lower bounds on the log-likelihood to be used as objectives for VI. However …