On information gain and regret bounds in Gaussian process bandits

S Vakili, K Khezeli, V Picheny - International Conference on …, 2021 - proceedings.mlr.press
Consider the sequential optimization of an expensive-to-evaluate and possibly non-convex
objective function $f$ from noisy feedback, which can be considered as a continuum-armed …

Adversarially robust optimization with Gaussian processes

I Bogunovic, J Scarlett, S Jegelka… - Advances in neural …, 2018 - proceedings.neurips.cc
In this paper, we consider the problem of Gaussian process (GP) optimization with an added
robustness requirement: The returned point may be perturbed by an adversary, and we …

Misspecified Gaussian process bandit optimization

I Bogunovic, A Krause - Advances in neural information …, 2021 - proceedings.neurips.cc
We consider the problem of optimizing a black-box function based on noisy bandit feedback.
Kernelized bandit algorithms have shown strong empirical and theoretical performance for …

Efficient model-based reinforcement learning through optimistic policy search and planning

S Curi, F Berkenkamp, A Krause - Advances in Neural …, 2020 - proceedings.neurips.cc
Abstract Model-based reinforcement learning algorithms with probabilistic dynamical
models are amongst the most data-efficient learning methods. This is often attributed to their …

Quantum Bayesian optimization

Z Dai, GKR Lau, A Verma, Y Shu… - Advances in Neural …, 2024 - proceedings.neurips.cc
Kernelized bandits, also known as Bayesian optimization (BO), have been a prevalent
method for optimizing complicated black-box reward functions. Various BO algorithms have …

On the sublinear regret of GP-UCB

J Whitehouse, A Ramdas… - Advances in Neural …, 2024 - proceedings.neurips.cc
In the kernelized bandit problem, a learner aims to sequentially compute the optimum of a
function lying in a reproducing kernel Hilbert space given only noisy evaluations at …

Gaussian process bandit optimization with few batches

Z Li, J Scarlett - International Conference on Artificial …, 2022 - proceedings.mlr.press
In this paper, we consider the problem of black-box optimization using Gaussian Process
(GP) bandit optimization with a small number of batches. Assuming the unknown function …

Stochastic zeroth-order optimization in high dimensions

Y Wang, S Du, S Balakrishnan… - … conference on artificial …, 2018 - proceedings.mlr.press
We consider the problem of optimizing a high-dimensional convex function using stochastic
zeroth-order queries. Under sparsity assumptions on the gradients or function values, we …

Optimal order simple regret for Gaussian process bandits

S Vakili, N Bouziani, S Jalali… - Advances in Neural …, 2021 - proceedings.neurips.cc
Consider the sequential optimization of a continuous, possibly non-convex, and expensive-
to-evaluate objective function $f$. The problem can be cast as a Gaussian Process (GP) …

Gaussian process optimization with adaptive sketching: Scalable and no regret

D Calandriello, L Carratino, A Lazaric… - … on Learning Theory, 2019 - proceedings.mlr.press
Gaussian processes (GP) are stochastic processes used as a Bayesian approach for the
optimization of black-box functions. Despite their effectiveness in simple problems, GP …