Hyperparameter optimization: Foundations, algorithms, best practices, and open challenges

B Bischl, M Binder, M Lang, T Pielok… - … : Data Mining and …, 2023 - Wiley Online Library
Most machine learning algorithms are configured by a set of hyperparameters whose values
must be carefully chosen and which often considerably impact performance. To avoid a time …

Tree-structured Parzen estimator: Understanding its algorithm components and their roles for better empirical performance

S Watanabe - arXiv preprint arXiv:2304.11127, 2023 - arxiv.org
Recent advances in many domains require increasingly complicated experiment design.
Such experiments often have many parameters, which necessitate parameter …

PriorBand: Practical hyperparameter optimization in the age of deep learning

N Mallik, E Bergman, C Hvarfner… - Advances in …, 2023 - proceedings.neurips.cc
Hyperparameters of Deep Learning (DL) pipelines are crucial for their downstream
performance. While a large number of methods for Hyperparameter Optimization (HPO) …

PFNs4BO: In-context learning for Bayesian optimization

S Müller, M Feurer, N Hollmann… - … on Machine Learning, 2023 - proceedings.mlr.press
In this paper, we use Prior-data Fitted Networks (PFNs) as a flexible surrogate for Bayesian
Optimization (BO). PFNs are neural processes that are trained to approximate the posterior …

Increasing the scope as you learn: Adaptive Bayesian optimization in nested subspaces

L Papenmeier, L Nardi… - Advances in Neural …, 2022 - proceedings.neurips.cc
Recent advances have extended the scope of Bayesian optimization (BO) to expensive-to-
evaluate black-box functions with dozens of dimensions, aspiring to unlock impactful …

JAHS-Bench-201: A foundation for research on joint architecture and hyperparameter search

A Bansal, D Stoll, M Janowski… - Advances in Neural …, 2022 - proceedings.neurips.cc
The past few years have seen the development of many benchmarks for Neural Architecture
Search (NAS), fueling rapid progress in NAS research. However, recent work, which shows …

AutoML in the age of large language models: Current challenges, future opportunities and risks

A Tornede, D Deng, T Eimer, J Giovanelli… - arXiv preprint arXiv …, 2023 - arxiv.org
The fields of both Natural Language Processing (NLP) and Automated Machine Learning
(AutoML) have achieved remarkable results in recent years. In NLP, especially Large …

Joint entropy search for maximally-informed Bayesian optimization

C Hvarfner, F Hutter, L Nardi - Advances in Neural …, 2022 - proceedings.neurips.cc
Information-theoretic Bayesian optimization techniques have become popular for
optimizing expensive-to-evaluate black-box functions due to their non-myopic qualities …

Efficient Hyperparameter Optimization with Adaptive Fidelity Identification

J Jiang, Z Wen, A Mansoor… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Hyperparameter Optimization and Neural Architecture Search are powerful in attaining
state-of-the-art machine learning models, with Bayesian Optimization (BO) standing …

Automated dynamic algorithm configuration

S Adriaensen, A Biedenkapp, G Shala, N Awad… - Journal of Artificial …, 2022 - jair.org
The performance of an algorithm often critically depends on its parameter configuration.
While a variety of automated algorithm configuration methods have been proposed to …