On information gain and regret bounds in Gaussian process bandits
Consider the sequential optimization of an expensive-to-evaluate and possibly non-convex
objective function $ f $ from noisy feedback, which can be considered as a continuum-armed …
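(For context: the central quantity in this line of work is the maximum information gain, commonly defined as $\gamma_T = \max_{A \subseteq \mathcal{X},\, |A| = T} I(\mathbf{y}_A; \mathbf{f}_A)$, the largest mutual information between $T$ noisy observations and the corresponding function values; cumulative regret bounds for GP bandit algorithms are then typically of the form $R_T = \tilde{O}(\sqrt{T \beta_T \gamma_T})$ with exploration parameter $\beta_T$. This is the standard formulation from the GP-bandit literature, stated here as background rather than quoted from the paper.)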
Adversarially robust optimization with Gaussian processes
In this paper, we consider the problem of Gaussian process (GP) optimization with an added
robustness requirement: The returned point may be perturbed by an adversary, and we …
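(As background, the usual way this robustness requirement is formalized, which we assume matches this paper's setting, is a max-min objective: find $x$ maximizing $\min_{\delta \in \Delta_\epsilon} f(x + \delta)$, where $\Delta_\epsilon$ is the set of perturbations the adversary may apply to the returned point.)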
Misspecified Gaussian process bandit optimization
We consider the problem of optimizing a black-box function based on noisy bandit feedback.
Kernelized bandit algorithms have shown strong empirical and theoretical performance for …
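(A common model of misspecification in this setting, stated here as an assumption rather than a quote from the paper, is that the true reward function only approximately lies in the RKHS used by the algorithm: there exists $\tilde f$ with $\|\tilde f\|_{\mathcal{H}_k} \le B$ and $\|f - \tilde f\|_\infty \le \epsilon$, and regret guarantees are then allowed to degrade with the misspecification level $\epsilon$.)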
Efficient model-based reinforcement learning through optimistic policy search and planning
Model-based reinforcement learning algorithms with probabilistic dynamical
models are amongst the most data-efficient learning methods. This is often attributed to their …
Quantum Bayesian optimization
Kernelized bandits, also known as Bayesian optimization (BO), have been a prevalent
method for optimizing complicated black-box reward functions. Various BO algorithms have …
On the sublinear regret of GP-UCB
In the kernelized bandit problem, a learner aims to sequentially compute the optimum of a
function lying in a reproducing kernel Hilbert space given only noisy evaluations at …
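(Since this entry concerns GP-UCB, a minimal sketch of the standard GP-UCB selection rule is included below to fix notation; the RBF kernel, noise level, and exploration parameter beta are illustrative choices, not values taken from the paper.)

import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    # Squared-exponential kernel k(a, b) = exp(-||a - b||^2 / (2 * lengthscale^2)).
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xstar, noise=0.01):
    # Exact GP posterior mean and standard deviation at the candidate points Xstar.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xstar)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v**2, axis=0), 1e-12, None)  # prior variance k(x, x) = 1
    return Ks.T @ alpha, np.sqrt(var)

def gp_ucb_step(X, y, candidates, beta=2.0, noise=0.01):
    # GP-UCB rule: query the candidate maximizing mu(x) + sqrt(beta) * sigma(x).
    mu, sigma = gp_posterior(X, y, candidates, noise)
    return candidates[np.argmax(mu + np.sqrt(beta) * sigma)]

# Toy usage on a 1-D black-box function over a finite candidate grid.
rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x[:, 0]) + 0.5 * x[:, 0]
candidates = np.linspace(0.0, 2.0, 200)[:, None]
X = rng.uniform(0.0, 2.0, (3, 1))                 # initial design
y = f(X) + 0.1 * rng.standard_normal(3)
for _ in range(20):
    x_next = gp_ucb_step(X, y, candidates)[None, :]
    y_next = f(x_next) + 0.1 * rng.standard_normal(1)
    X, y = np.vstack([X, x_next]), np.concatenate([y, y_next])
print("best observed point:", X[np.argmax(y)])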
Gaussian process bandit optimization with few batches
In this paper, we consider the problem of black-box optimization using Gaussian Process
(GP) bandit optimization with a small number of batches. Assuming the unknown function …
Stochastic zeroth-order optimization in high dimensions
We consider the problem of optimizing a high-dimensional convex function using stochastic
zeroth-order queries. Under sparsity assumptions on the gradients or function values, we …
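(For reference: the basic primitive behind stochastic zeroth-order methods is a gradient estimate built from function queries alone, e.g. the two-point estimator $\hat g(x) = \frac{f(x + \delta u) - f(x - \delta u)}{2\delta}\, u$ with random direction $u$ and smoothing radius $\delta > 0$; this is a generic construction, and the sparsity assumptions in the paper are what make the dependence on the dimension manageable. Stated as background, not quoted from the paper.)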
Optimal order simple regret for Gaussian process bandits
Consider the sequential optimization of a continuous, possibly non-convex, and expensive-to-evaluate
objective function $ f $. The problem can be cast as a Gaussian Process (GP) …
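(The performance measure in the title is simple regret, usually defined as $r_T = f(x^*) - f(\hat x_T)$, where $x^*$ maximizes $f$ and $\hat x_T$ is the point reported after $T$ evaluations, as opposed to the cumulative regret $R_T = \sum_{t=1}^{T} (f(x^*) - f(x_t))$; definition stated here for context.)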
Gaussian process optimization with adaptive sketching: Scalable and no regret
Gaussian processes (GP) are stochastic processes, used as a Bayesian approach for the
optimization of black-box functions. Despite their effectiveness in simple problems, GP …
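(The scalability issue that motivates sketching comes from the exact GP posterior: with $t$ observations, the mean $\mu_t(x) = k_t(x)^\top (K_t + \lambda I)^{-1} y_t$ and variance $\sigma_t^2(x) = k(x,x) - k_t(x)^\top (K_t + \lambda I)^{-1} k_t(x)$ require solving linear systems in the full $t \times t$ kernel matrix, costing $O(t^3)$ per exact recomputation; sketching-based approximations aim to reduce this cost while retaining regret guarantees. This is standard background, not a quote from the paper.)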