Unexpected improvements to expected improvement for Bayesian optimization
Expected Improvement (EI) is arguably the most popular acquisition function in Bayesian
optimization and has found countless successful applications, but its performance is often …
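For context, a minimal statement of the classical acquisition function (the textbook form, not the reformulation proposed in this paper): under a Gaussian process posterior with mean $\mu(x)$ and standard deviation $\sigma(x)$, and incumbent best value $f^*$, maximization-form EI is $\mathrm{EI}(x) = (\mu(x) - f^*)\,\Phi(z) + \sigma(x)\,\varphi(z)$ with $z = (\mu(x) - f^*)/\sigma(x)$, where $\Phi$ and $\varphi$ denote the standard normal CDF and PDF.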
Neural architecture search: Insights from 1000 papers
In the past decade, advances in deep learning have resulted in breakthroughs in a variety of
areas, including computer vision, natural language understanding, speech recognition, and …
Joint entropy search for multi-objective Bayesian optimization
Many real-world problems can be phrased as a multi-objective optimization problem, where
the goal is to identify the best set of compromises between the competing objectives. Multi …
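As a reminder of the standard multi-objective setting (generic background, not taken from this abstract): for objectives $f_1, \dots, f_M$ to be maximized, $x$ is Pareto-optimal if no $x'$ satisfies $f_m(x') \ge f_m(x)$ for all $m$ with strict inequality for some $m$; the "best set of compromises" is this Pareto set, whose image in objective space is the Pareto front.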
Combining multi-fidelity modelling and asynchronous batch Bayesian Optimization
Bayesian Optimization is a useful tool for experiment design. Unfortunately, the classical,
sequential setting of Bayesian Optimization does not translate well into laboratory …
Randomized Gaussian process upper confidence bound with tighter Bayesian regret bounds
Gaussian process upper confidence bound (GP-UCB) is a theoretically promising approach
for black-box optimization; however, the confidence parameter $\beta$ is considerably large …
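For reference, the standard (non-randomized) GP-UCB rule chooses $x_t \in \arg\max_{x}\, \mu_{t-1}(x) + \sqrt{\beta_t}\,\sigma_{t-1}(x)$, with $\mu_{t-1}$ and $\sigma_{t-1}$ the GP posterior mean and standard deviation; the paper's randomized treatment of $\beta$ is not reproduced here.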
Self-correcting Bayesian optimization through Bayesian active learning
Gaussian processes are the model of choice in Bayesian optimization and active learning.
Yet, they are highly dependent on cleverly chosen hyperparameters to reach their full …
Optimizing Posterior Samples for Bayesian Optimization via Rootfinding
Bayesian optimization devolves the global optimization of a costly objective function to the
global optimization of a sequence of acquisition functions. This inner-loop optimization can …
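In generic notation (not specific to this paper), the inner loop referred to here is $x_{n+1} \in \arg\max_{x \in \mathcal{X}} \alpha_n(x)$ for the current acquisition function $\alpha_n$, which in the posterior-sampling setting is itself a sampled path of the surrogate and must be globally optimized at every iteration.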
SOBER: Highly parallel Bayesian optimization and Bayesian quadrature over discrete and mixed spaces
Batch Bayesian optimisation and Bayesian quadrature have been shown to be sample-
efficient methods of performing optimisation and quadrature where expensive-to-evaluate …
Transition Constrained Bayesian Optimization via Markov Decision Processes
Bayesian optimization is a methodology to optimize black-box functions. Traditionally, it
focuses on the setting where you can arbitrarily query the search space. However, many real …
Domain-agnostic batch Bayesian optimization with diverse constraints via Bayesian quadrature
Real-world optimisation problems often feature complex combinations of (1) diverse
constraints, (2) discrete and mixed spaces, and are (3) highly parallelisable. (4) There are …