Derivative-free optimization methods

J Larson, M Menickelly, SM Wild - Acta Numerica, 2019 - cambridge.org
In many optimization problems arising from scientific, engineering and artificial intelligence
applications, objective and constraint functions are available only as the output of a black …

Zeroth-order nonconvex stochastic optimization: Handling constraints, high dimensionality, and saddle points

K Balasubramanian, S Ghadimi - Foundations of Computational …, 2022 - Springer
In this paper, we propose and analyze zeroth-order stochastic approximation algorithms for
nonconvex and convex optimization, with a focus on addressing constrained optimization …

The power of first-order smooth optimization for black-box non-smooth problems

A Gasnikov, A Novitskii, V Novitskii… - arXiv preprint arXiv …, 2022 - arxiv.org
Gradient-free/zeroth-order methods for black-box convex optimization have been
extensively studied in the last decade with the main focus on oracle call complexity. In this …

A gradient estimator via L1-randomization for online zero-order optimization with two point feedback

A Akhavan, E Chzhen, M Pontil… - Advances in Neural …, 2022 - proceedings.neurips.cc
This work studies online zero-order optimization of convex and Lipschitz functions. We
present a novel gradient estimator based on two function evaluations and randomization on …
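
For orientation, below is a minimal sketch of a generic two-point randomized gradient estimator in Python; the sampling distribution (Euclidean sphere), the dimension-based scaling, and the function names are illustrative assumptions, and the paper's ℓ1-randomized construction differs in how the direction is drawn and weighted.

```python
import numpy as np

def two_point_gradient_estimate(f, x, h=1e-3, rng=None):
    """Symmetric two-point zeroth-order gradient estimate (illustrative sketch).

    Draws a random direction u uniformly on the Euclidean unit sphere and
    returns d * (f(x + h*u) - f(x - h*u)) / (2*h) * u, an (approximately)
    unbiased estimate of the gradient of a smoothed version of f.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                      # uniform direction on the sphere
    return d * (f(x + h * u) - f(x - h * u)) / (2.0 * h) * u

# Usage: for f(z) = 0.5 * ||z||^2 the true gradient at x is x itself.
f = lambda z: 0.5 * np.dot(z, z)
x = np.ones(5)
g_hat = two_point_gradient_estimate(f, x)
```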

Randomized gradient-free methods in convex optimization

A Gasnikov, D Dvinskikh, P Dvurechensky… - Encyclopedia of …, 2023 - Springer
Consider a convex optimization problem $\min_{x \in Q \subseteq \mathbb{R}^d} f(x)$ (1) with convex feasible set Q
and convex objective f possessing the zeroth-order (gradient/derivative-free) oracle [83]. The …
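
To illustrate how problem (1) can be tackled with only a zeroth-order oracle, here is a minimal sketch of a projected randomized gradient-free iteration (a one-direction finite-difference estimate followed by projection onto Q); the smoothing parameter, step size, and function names are illustrative assumptions, not the specific schemes surveyed in this entry.

```python
import numpy as np

def zo_projected_gradient(f, project, x0, h=1e-3, step=1e-2, iters=500, rng=None):
    """Illustrative randomized gradient-free method for min_{x in Q} f(x).

    Each iteration replaces the true gradient with a finite difference of f
    along a random Gaussian direction, then projects back onto Q.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    d = x.shape[0]
    for _ in range(iters):
        e = rng.standard_normal(d)
        g = (f(x + h * e) - f(x)) / h * e       # randomized smoothing estimate
        x = project(x - step * g)               # projection keeps x in Q
    return x

# Example: minimize f(x) = ||x - 1||^2 over the box Q = [0, 0.5]^d.
f = lambda z: np.sum((z - 1.0) ** 2)
project = lambda z: np.clip(z, 0.0, 0.5)
x_star = zo_projected_gradient(f, project, x0=np.zeros(3))
```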

Online nonconvex optimization with limited instantaneous oracle feedback

Z Guan, Y Zhou, Y Liang - The Thirty Sixth Annual …, 2023 - proceedings.mlr.press
We investigate online nonconvex optimization from a local regret minimization perspective.
Previous studies along this line implicitly required access to sufficient gradient oracles at …

Improve single-point zeroth-order optimization using high-pass and low-pass filters

X Chen, Y Tang, N Li - International Conference on Machine …, 2022 - proceedings.mlr.press
Single-point zeroth-order optimization (SZO) is useful in solving online black-box
optimization and control problems in time-varying environments, as it queries the function …

Gradient-free methods with inexact oracle for convex-concave stochastic saddle-point problem

A Beznosikov, A Sadiev, A Gasnikov - International Conference on …, 2020 - Springer
In the paper, we generalize the approach of Gasnikov et al. (2017), which allows one to solve
(stochastic) convex optimization problems with an inexact gradient-free oracle, to the convex …

Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs

A Lobanov, A Veprikov, G Konin, A Beznosikov… - Computational …, 2023 - Springer
Distributed optimization has a rich history and has demonstrated its effectiveness in many
applications, including machine learning. In this paper we study a subclass of distributed …

A new one-point residual-feedback oracle for black-box learning and control

Y Zhang, Y Zhou, K Ji, MM Zavlanos - Automatica, 2022 - Elsevier
Zeroth-order optimization (ZO) algorithms have recently been used to solve black-box or
simulation-based learning and control problems, where the gradient of the objective function …
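
A minimal sketch of the single-query, residual-feedback idea described in this abstract is given below, assuming the estimate is formed from the difference between the current query and the value of the previous query; the class name, parameters, and dimension-based scaling are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

class ResidualFeedbackOracle:
    """One-point residual-feedback gradient estimator (illustrative sketch).

    Each call makes a single function query f(x + delta * u) and forms the
    estimate from the residual against the previous query's value, so only
    one evaluation per iteration is needed (vs. two for two-point schemes).
    """

    def __init__(self, f, dim, delta=1e-2, rng=None):
        self.f = f
        self.dim = dim
        self.delta = delta
        self.rng = np.random.default_rng() if rng is None else rng
        self.prev_value = None                  # f value from the previous query

    def estimate(self, x):
        u = self.rng.standard_normal(self.dim)
        u /= np.linalg.norm(u)
        value = self.f(x + self.delta * u)
        if self.prev_value is None:
            self.prev_value = value
            return np.zeros(self.dim)           # no residual yet on the first call
        g = self.dim / self.delta * (value - self.prev_value) * u
        self.prev_value = value
        return g
```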