Derivative-free optimization methods
In many optimization problems arising from scientific, engineering and artificial intelligence
applications, objective and constraint functions are available only as the output of a black …
A theoretical and empirical comparison of gradient approximations in derivative-free optimization
In this paper, we analyze several methods for approximating gradients of noisy functions
using only function values. These methods include finite differences, linear interpolation …
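The finite-difference scheme mentioned above is the simplest of these gradient approximations. The sketch below shows a forward-difference estimate using only function values; the function `fd_gradient`, its step size `h`, and the test function are illustrative choices, not taken from the paper.

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient estimate using only function values.

    A minimal sketch: perturb one coordinate at a time by h and take
    the difference quotient. Step size h trades truncation error
    (large h) against noise amplification (small h).
    """
    x = np.asarray(x, dtype=float)
    g = np.empty_like(x)
    fx = f(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h  # perturb only coordinate i
        g[i] = (f(x + e) - fx) / h
    return g

# Example: gradient of f(x) = x1^2 + x2^2 at (1, 2) is (2, 4)
f = lambda x: float(np.sum(x**2))
g = fd_gradient(f, [1.0, 2.0])
```

When function values are noisy, the analysis in papers like this one shows the step size should scale with the noise level rather than being driven toward zero.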
Adaptive sampling strategies for stochastic optimization
In this paper, we propose a stochastic optimization method that adaptively controls the
sample size used in the computation of gradient approximations. Unlike other variance …
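The idea of adaptively controlling the sample size can be sketched as follows: keep growing the batch until the estimated standard error of the averaged gradient is small relative to its norm (a "norm test" style criterion). Everything here, including the function name, doubling rule, and the parameter `theta`, is an illustrative assumption, not the paper's algorithm.

```python
import numpy as np

def adaptive_sample_gradient(grad_oracle, x, n0=8, theta=0.5, max_n=4096, rng=None):
    """Average noisy gradient samples, doubling the sample size until
    the estimated variance of the mean is at most (theta * ||g||)^2.

    grad_oracle(x, rng) returns one noisy gradient sample at x.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    n = n0
    while True:
        samples = np.array([grad_oracle(x, rng) for _ in range(n)])
        g = samples.mean(axis=0)
        # estimated total variance of the averaged gradient
        var_of_mean = samples.var(axis=0, ddof=1).sum() / n
        if var_of_mean <= (theta * np.linalg.norm(g)) ** 2 or n >= max_n:
            return g, n
        n *= 2  # not accurate enough yet: double the sample size

# Example: noisy oracle for f(x) = ||x||^2, whose true gradient is 2x
oracle = lambda x, rng: 2 * x + rng.normal(0.0, 0.05, size=x.shape)
g, n = adaptive_sample_gradient(oracle, np.array([1.0, -1.0]))
```

The design point is that accuracy requirements are enforced relative to the gradient norm, so small batches suffice far from a stationary point and larger ones are used only as the gradient shrinks.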
Global convergence rate analysis of a generic line search algorithm with noise
In this paper, we develop convergence analysis of a modified line search method for
objective functions whose value is computed with noise and whose gradient estimates are …
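A common modification in such noise-aware line search analyses is to relax the Armijo sufficient-decrease test by a term proportional to the noise level, so that bounded noise cannot reject every trial step. The sketch below shows this relaxation under stated assumptions; the function name, defaults, and the `2 * eps_f` slack are illustrative, not the paper's exact method.

```python
def noise_tolerant_armijo(f, x, g, d, eps_f, c1=1e-4, alpha=1.0, tau=0.5, max_iter=30):
    """Backtracking line search with an Armijo condition relaxed by 2*eps_f.

    f      : noisy objective (values accurate to within eps_f)
    x, g, d: current point, gradient estimate, descent direction
    Accepts alpha when f(x + alpha*d) <= f(x) + c1*alpha*g.d + 2*eps_f.
    """
    fx = f(x)
    slope = c1 * sum(gi * di for gi, di in zip(g, d))  # c1 * g.d (negative)
    for _ in range(max_iter):
        x_new = [xi + alpha * di for xi, di in zip(x, d)]
        if f(x_new) <= fx + alpha * slope + 2 * eps_f:
            return alpha
        alpha *= tau  # backtrack
    return alpha

# Example: f(x) = x1^2 + x2^2 from (1, 0) along the steepest-descent direction
f = lambda x: x[0] ** 2 + x[1] ** 2
alpha = noise_tolerant_armijo(f, [1.0, 0.0], g=[2.0, 0.0], d=[-2.0, 0.0], eps_f=0.0)
```

With `eps_f = 0` this reduces to ordinary backtracking Armijo; a positive `eps_f` widens the acceptance band so the method keeps making progress until iterates reach a noise-dominated neighborhood.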
Enhancing Efficiency of Safe Reinforcement Learning via Sample Manipulation
Safe reinforcement learning (RL) is crucial for deploying RL agents in real-world
applications, as it aims to maximize long-term rewards while satisfying safety constraints …
ASTRO-DF: A class of adaptive sampling trust-region algorithms for derivative-free stochastic optimization
We consider unconstrained optimization problems where only “stochastic” estimates of the
objective function are observable as replicates from a Monte Carlo oracle. The Monte Carlo …
A trust region method for noisy unconstrained optimization
Classical trust region methods were designed to solve problems in which function and
gradient information are exact. This paper considers the case when there are errors (or …
An introduction to multiobjective simulation optimization
The multiobjective simulation optimization (MOSO) problem is a nonlinear multiobjective
optimization problem in which multiple simultaneous and conflicting objective functions can …
Stochastic gradient line Bayesian optimization for efficient noise-robust optimization of parameterized quantum circuits
Optimizing parameterized quantum circuits is a key routine in using near-term quantum
devices. However, the existing algorithms for such optimization require an excessive …
Adaptive stochastic optimization: A framework for analyzing stochastic optimization algorithms
Optimization lies at the heart of machine learning (ML) and signal processing (SP).
Contemporary approaches based on the stochastic gradient (SG) method are nonadaptive …