Rates of convergence for sparse variational Gaussian process regression
Excellent variational approximations to Gaussian process posteriors have been developed
which avoid the $\mathcal{O}(N^3)$ scaling with dataset size $N$. They reduce …
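A minimal sketch of the kind of inducing-point approximation this line of work analyses, in the style of Titsias' variational posterior: with $M \ll N$ inducing inputs the dominant cost drops from $\mathcal{O}(N^3)$ to $\mathcal{O}(NM^2)$. The RBF kernel, noise level, and inducing-point placement below are illustrative assumptions, not the paper's setup.

```python
# Sketch of sparse (inducing-point) GP regression; cost is O(N M^2) rather
# than O(N^3) because only M x M systems are ever solved.
import numpy as np

def rbf(A, B, lengthscale=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def sgpr_predict(X, y, Z, Xs, noise=0.1, jitter=1e-8):
    """Posterior mean/variance at test points Xs, Titsias-style approximation."""
    Kuu = rbf(Z, Z) + jitter * np.eye(len(Z))     # M x M
    Kuf = rbf(Z, X)                               # M x N
    Ksu = rbf(Xs, Z)                              # S x M
    Sigma = Kuu + Kuf @ Kuf.T / noise             # M x M, built in O(N M^2)
    mean = Ksu @ np.linalg.solve(Sigma, Kuf @ y) / noise
    var = (rbf(Xs, Xs).diagonal()
           - np.einsum('sm,ms->s', Ksu, np.linalg.solve(Kuu, Ksu.T))
           + np.einsum('sm,ms->s', Ksu, np.linalg.solve(Sigma, Ksu.T)))
    return mean, var

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
Z = np.linspace(-3, 3, 20)[:, None]               # M = 20 inducing inputs (assumed)
mu, v = sgpr_predict(X, y, Z, np.linspace(-3, 3, 50)[:, None])
```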
Recursive sampling for the Nyström method
We give the first algorithm for kernel Nystrom approximation that runs in linear time in the
number of training points and is provably accurate for all kernel matrices, without …
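For context, a sketch of the basic Nyström approximation $K \approx C W^{+} C^{\top}$ built from a set of landmark columns. The paper's contribution is the recursive ridge-leverage-score scheme for choosing those landmarks in linear time with guarantees; the uniform landmark selection below is a placeholder, not that algorithm.

```python
# Basic Nystrom approximation from m sampled landmark columns.
import numpy as np

def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def nystrom(kernel, X, m, rng):
    idx = rng.choice(len(X), size=m, replace=False)  # landmarks (uniform here)
    C = kernel(X, X[idx])                            # N x m cross-kernel
    W = kernel(X[idx], X[idx])                       # m x m landmark kernel
    return C, np.linalg.pinv(W)                      # K_hat = C @ pinv(W) @ C.T

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5))
C, W_pinv = nystrom(rbf, X, m=50, rng=rng)
K_hat = C @ W_pinv @ C.T     # low-rank surrogate for the full kernel matrix
```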
Convergence of sparse variational inference in Gaussian processes regression
Gaussian processes are distributions over functions that are versatile and mathematically
convenient priors in Bayesian modelling. However, their use is often impeded for data with …
Query-focused video summarization: Dataset, evaluation, and a memory network based approach
Recent years have witnessed a resurgence of interest in video summarization. However,
one of the main obstacles to the research on video summarization is the user subjectivity …
Input sparsity time low-rank approximation via ridge leverage score sampling
We present a new algorithm for finding a near optimal low-rank approximation of a matrix A
in O(nnz(A)) time. Our method is based on a recursive sampling scheme for computing a …
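To make the sampling quantity concrete, here is a sketch of exact ridge leverage scores and proportional row sampling. The paper approximates these scores recursively in O(nnz(A)) time; the exact computation below costs O(n d^2) and is for illustration only, with the regularization value chosen arbitrarily.

```python
# Exact ridge leverage scores tau_i = a_i^T (A^T A + lam I)^{-1} a_i,
# followed by row sampling proportional to tau.
import numpy as np

def ridge_leverage_scores(A, lam):
    G = A.T @ A + lam * np.eye(A.shape[1])
    return np.einsum('ij,ji->i', A, np.linalg.solve(G, A.T))

rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 30))
tau = ridge_leverage_scores(A, lam=1.0)
p = tau / tau.sum()                                     # sampling probabilities
rows = rng.choice(len(A), size=100, replace=True, p=p)
A_sampled = A[rows] / np.sqrt(100 * p[rows])[:, None]   # rescaled row sample
```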
Fractionally log-concave and sector-stable polynomials: counting planar matchings and more
We show fully polynomial time randomized approximation schemes (FPRAS) for counting
matchings of a given size, or more generally sampling/counting monomer-dimer systems in …
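A tiny brute-force illustration of the quantity being approximated: the number of matchings of a given size k, i.e. sets of k pairwise vertex-disjoint edges. This is not the paper's FPRAS; exact enumeration is only feasible on very small graphs, which is what motivates the approximation scheme.

```python
# Count matchings of size k by exhaustive enumeration (illustration only).
from itertools import combinations

def count_matchings(edges, k):
    count = 0
    for subset in combinations(edges, k):
        vertices = [v for e in subset for v in e]
        if len(vertices) == len(set(vertices)):  # edges are vertex-disjoint
            count += 1
    return count

# 4-cycle: the size-2 matchings are {(0,1),(2,3)} and {(1,2),(3,0)}
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(count_matchings(edges, 2))   # -> 2
```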
Sampling-based Nyström approximation and kernel quadrature
We analyze the Nyström approximation of a positive definite kernel associated with a
probability measure. We first prove an improved error bound for the conventional Nyström …
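A sketch of the kernel-quadrature side of this setting: given quadrature nodes, weights are chosen so that the weighted sum of kernel evaluations matches the kernel mean embedding of the measure, here approximated by a large i.i.d. sample. The nodes, kernel, and jitter below are assumptions for illustration, not the paper's construction or bounds.

```python
# Kernel quadrature weights for fixed nodes Z against an empirical measure.
import numpy as np

def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 1))           # sample approximating the measure
Z = np.linspace(-3, 3, 10)[:, None]          # quadrature nodes (assumed given)
mean_embedding = rbf(Z, X).mean(axis=1)      # (1/N) sum_i k(z_j, x_i)
w = np.linalg.solve(rbf(Z, Z) + 1e-8 * np.eye(len(Z)), mean_embedding)

f = lambda x: np.sin(x).ravel()
estimate = w @ f(Z)                          # quadrature rule sum_j w_j f(z_j)
monte_carlo = f(X).mean()                    # plain Monte Carlo baseline
```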
Batched Gaussian process bandit optimization via determinantal point processes
Gaussian Process bandit optimization has emerged as a powerful tool for optimizing noisy
black box functions. One example in machine learning is hyper-parameter optimization …
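A sketch of the diversity mechanism involved: a batch of points is chosen by greedily maximizing the log-determinant of the kernel submatrix, a common MAP-style surrogate for DPP sampling. The paper couples this with the GP posterior over candidate points; the fixed RBF kernel over random candidates below is only a stand-in.

```python
# Greedy log-det batch selection (a DPP MAP heuristic) over a kernel matrix.
import numpy as np

def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def greedy_dpp_batch(K, k):
    """Greedily grow a set of k indices maximizing log det K[S, S]."""
    selected = []
    for _ in range(k):
        best, best_val = None, -np.inf
        for i in range(len(K)):
            if i in selected:
                continue
            idx = selected + [i]
            _, val = np.linalg.slogdet(K[np.ix_(idx, idx)])
            if val > best_val:
                best, best_val = i, val
        selected.append(best)
    return selected

rng = np.random.default_rng(0)
candidates = rng.uniform(0, 1, size=(200, 2))      # hypothetical candidate points
K = rbf(candidates, candidates) + 1e-6 * np.eye(200)
batch = greedy_dpp_batch(K, k=5)                   # indices of a diverse batch
```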
GRAIL: efficient time-series representation learning
The analysis of time series is becoming increasingly prevalent across scientific disciplines
and industrial applications. The effectiveness and the scalability of time-series mining …
Determinantal point processes for mini-batch diversification
We study a mini-batch diversification scheme for stochastic gradient descent (SGD). While
classical SGD relies on uniformly sampling data points to form a mini-batch, we propose a …
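A sketch of drawing one diversified mini-batch with a simple swap-based Metropolis chain whose stationary distribution is the k-DPP $P(S) \propto \det(K_S)$ over a data-similarity kernel. This illustrates the idea of DPP-based mini-batch sampling rather than the paper's exact sampler; the kernel, batch size, and chain length are placeholder choices.

```python
# Swap-based MCMC targeting a k-DPP over the training data.
import numpy as np

def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def kdpp_minibatch(K, k, steps=2000, rng=None):
    rng = rng or np.random.default_rng()
    n = len(K)
    S = list(rng.choice(n, size=k, replace=False))
    _, logdet_S = np.linalg.slogdet(K[np.ix_(S, S)])
    for _ in range(steps):
        T = S.copy()
        T[rng.integers(k)] = int(rng.integers(n))     # propose swapping one item
        if len(set(T)) < k:
            continue                                  # skip proposals with repeats
        _, logdet_T = np.linalg.slogdet(K[np.ix_(T, T)])
        if np.log(rng.random()) < logdet_T - logdet_S:  # Metropolis acceptance
            S, logdet_S = T, logdet_T
    return S

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 10))                   # the training set
K = rbf(X, X) + 1e-6 * np.eye(len(X))
batch_idx = kdpp_minibatch(K, k=32, rng=rng)          # indices of one mini-batch
```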