Derivative-free optimization methods

J Larson, M Menickelly, SM Wild - Acta Numerica, 2019 - cambridge.org
In many optimization problems arising from scientific, engineering and artificial intelligence
applications, objective and constraint functions are available only as the output of a black …

A theoretical and empirical comparison of gradient approximations in derivative-free optimization

AS Berahas, L Cao, K Choromanski… - Foundations of …, 2022 - Springer
In this paper, we analyze several methods for approximating gradients of noisy functions
using only function values. These methods include finite differences, linear interpolation …
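As a point of reference for the simplest of the estimators named in this snippet (an illustration only, not the paper's analysis), a forward finite-difference gradient can be sketched as follows; with noisy function values, the step `h` trades truncation error against noise amplification:

```python
import numpy as np

def fd_gradient(f, x, h=1e-5):
    """Forward finite-difference gradient estimate of f at x.

    Coordinate i uses (f(x + h*e_i) - f(x)) / h. This is a generic
    sketch; h=1e-5 is an illustrative default, not a recommended choice
    for any particular noise level.
    """
    x = np.asarray(x, dtype=float)
    g = np.empty_like(x)
    fx = f(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

# Example: f(x) = x0^2 + 3*x1 has gradient (2, 3) at (1, 2).
g = fd_gradient(lambda x: x[0]**2 + 3*x[1], np.array([1.0, 2.0]))
```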

Adaptive sampling strategies for stochastic optimization

R Bollapragada, R Byrd, J Nocedal - SIAM Journal on Optimization, 2018 - SIAM
In this paper, we propose a stochastic optimization method that adaptively controls the
sample size used in the computation of gradient approximations. Unlike other variance …
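One common form of such a sample-size control is a norm-test-style check: grow the sample when the variance of the per-sample gradients is large relative to the norm of their average. The sketch below illustrates that generic idea (the threshold parameter `theta` and the exact rule are assumptions of this sketch, not the paper's precise test):

```python
import numpy as np

def needs_larger_sample(grads, theta=0.9):
    """Norm-test-style check on a batch of per-sample gradients.

    grads has shape (|S|, d). The sample size |S| is deemed too small
    when the total sample variance divided by |S| exceeds theta^2 times
    the squared norm of the averaged gradient.
    """
    grads = np.asarray(grads, dtype=float)
    S = grads.shape[0]
    g_bar = grads.mean(axis=0)
    var = grads.var(axis=0, ddof=1).sum()  # total sample variance
    return bool(var / S > theta**2 * np.dot(g_bar, g_bar))
```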

Global convergence rate analysis of a generic line search algorithm with noise

AS Berahas, L Cao, K Scheinberg - SIAM Journal on Optimization, 2021 - SIAM
In this paper, we develop a convergence analysis of a modified line search method for
objective functions whose values are computed with noise and whose gradient estimates are …
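A typical modification of this kind relaxes the Armijo sufficient-decrease test by an additive allowance for the function-value error. The backtracking sketch below illustrates that generic idea under the assumption that function values are accurate to within `eps_f`; the `2 * eps_f` slack and the parameter defaults are illustrative, not the paper's exact rule:

```python
import numpy as np

def noisy_armijo_step(f, x, g, d, eps_f, c1=1e-4,
                      alpha0=1.0, backtrack=0.5, max_iter=30):
    """Backtracking line search with a noise-relaxed Armijo condition.

    Accepts alpha once f(x + alpha*d) <= f(x) + c1*alpha*g'd + 2*eps_f,
    so sufficient decrease is only required up to the noise allowance.
    """
    fx = f(x)
    slope = float(np.dot(g, d))  # directional derivative estimate
    alpha = alpha0
    for _ in range(max_iter):
        if f(x + alpha * d) <= fx + c1 * alpha * slope + 2.0 * eps_f:
            return alpha
        alpha *= backtrack
    return alpha
```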

Enhancing Efficiency of Safe Reinforcement Learning via Sample Manipulation

S Gu, L Shi, Y Ding, A Knoll… - Advances in …, 2025 - proceedings.neurips.cc
Safe reinforcement learning (RL) is crucial for deploying RL agents in real-world
applications, as it aims to maximize long-term rewards while satisfying safety constraints …

ASTRO-DF: A class of adaptive sampling trust-region algorithms for derivative-free stochastic optimization

S Shashaani, FS Hashemi, R Pasupathy - SIAM Journal on Optimization, 2018 - SIAM
We consider unconstrained optimization problems where only “stochastic” estimates of the
objective function are observable as replicates from a Monte Carlo oracle. The Monte Carlo …
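Adaptive replication from a Monte Carlo oracle is often driven by tying the estimator's standard error to the current trust-region radius. The sketch below shows that general pattern only (the stopping rule `se <= kappa * delta**2`, the constants, and the oracle signature are all assumptions of this sketch, not ASTRO-DF's precise sampling rule):

```python
import numpy as np

def adaptive_mc_estimate(oracle, kappa, delta, n0=10, n_max=10000, rng=None):
    """Draw Monte Carlo replicates until the sample mean looks accurate
    enough relative to the trust-region radius delta.

    Stops once the estimated standard error of the mean falls below
    kappa * delta**2, so smaller radii demand more accurate estimates.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    vals = [oracle(rng) for _ in range(n0)]
    while len(vals) < n_max:
        se = np.std(vals, ddof=1) / np.sqrt(len(vals))
        if se <= kappa * delta**2:
            break
        vals.append(oracle(rng))
    return float(np.mean(vals)), len(vals)
```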

A trust region method for noisy unconstrained optimization

S Sun, J Nocedal - Mathematical Programming, 2023 - Springer
Classical trust region methods were designed to solve problems in which function and
gradient information are exact. This paper considers the case when there are errors (or …
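A common way to make a trust-region iteration tolerant of such errors is to relax the acceptance ratio test by a noise allowance, so tiny predicted reductions swamped by noise do not cause spurious rejections. The single-iteration sketch below (Cauchy-point step, relaxed ratio) illustrates that general idea; the `2 * eps_f` shift, the radius-update constants, and the cap of 10 on the radius are assumptions of this sketch, not the paper's exact algorithm:

```python
import numpy as np

def tr_step_noisy(f, x, g, B, delta, eps_f, eta=0.1):
    """One trust-region iteration with a noise-relaxed ratio test.

    Minimizes the quadratic model along -g within radius delta (the
    Cauchy point), then accepts or shrinks using a ratio whose
    numerator and denominator are both shifted by 2*eps_f.
    """
    gnorm = np.linalg.norm(g)
    gBg = float(g @ B @ g)
    # Unconstrained minimizer of the model along -g, clipped to the radius.
    t_unc = gnorm**2 / gBg if gBg > 0 else np.inf
    t = min(t_unc, delta / gnorm)
    s = -t * g
    pred = t * gnorm**2 - 0.5 * t**2 * gBg   # predicted model reduction
    ared = f(x) - f(x + s)                   # actual (noisy) reduction
    rho = (ared + 2 * eps_f) / (pred + 2 * eps_f)
    if rho >= eta:
        return x + s, min(2 * delta, 10.0)   # accept; possibly expand radius
    return x, delta / 2                      # reject; shrink radius
```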

An introduction to multiobjective simulation optimization

SR Hunter, EA Applegate, V Arora, B Chong… - ACM Transactions on …, 2019 - dl.acm.org
The multiobjective simulation optimization (MOSO) problem is a nonlinear multiobjective
optimization problem in which multiple simultaneous and conflicting objective functions can …
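When objectives conflict, solution quality is usually compared via Pareto dominance, a basic notion underlying multiobjective (simulation) optimization. A minimal sketch for minimization:

```python
def dominates(a, b):
    """Pareto dominance for minimization: a dominates b when a is no
    worse in every objective and strictly better in at least one.
    """
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))
```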

Stochastic gradient line Bayesian optimization for efficient noise-robust optimization of parameterized quantum circuits

S Tamiya, H Yamasaki - npj Quantum Information, 2022 - nature.com
Optimizing parameterized quantum circuits is a key routine in using near-term quantum
devices. However, the existing algorithms for such optimization require an excessive …

Adaptive stochastic optimization: A framework for analyzing stochastic optimization algorithms

FE Curtis, K Scheinberg - IEEE Signal Processing Magazine, 2020 - ieeexplore.ieee.org
Optimization lies at the heart of machine learning (ML) and signal processing (SP).
Contemporary approaches based on the stochastic gradient (SG) method are nonadaptive …