Derivative-free optimization methods
In many optimization problems arising from scientific, engineering and artificial intelligence
applications, objective and constraint functions are available only as the output of a black …
Zeroth-order nonconvex stochastic optimization: Handling constraints, high dimensionality, and saddle points
In this paper, we propose and analyze zeroth-order stochastic approximation algorithms for
nonconvex and convex optimization, with a focus on addressing constrained optimization …
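The constrained setting above can be illustrated by combining a randomized gradient estimate with a projection step. Below is a minimal sketch, not the paper's algorithm: it assumes a Euclidean-ball feasible set, a Gaussian-smoothing two-point estimator, and a smooth objective; all function names are illustrative.

```python
import numpy as np

def two_point_grad(f, x, delta, rng):
    # Gaussian-smoothing two-point estimator: in expectation this is the
    # gradient of the delta-smoothed surrogate of f (exact for quadratics).
    u = rng.standard_normal(x.size)
    return (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u

def project_ball(x, radius=1.0):
    # Euclidean projection onto the feasible set {x : ||x|| <= radius}.
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def zo_projected_sgd(f, x0, steps=3000, lr=0.02, delta=1e-3, seed=1):
    # Zeroth-order projected stochastic gradient: query f twice per step,
    # take a gradient-estimate step, project back onto the constraint.
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(steps):
        x = project_ball(x - lr * two_point_grad(f, x, delta, rng))
    return x
```

For example, minimizing f(x) = ||x - c||^2 over the unit ball with c = (2, 0, 0) outside the ball drives the iterates toward the boundary point (1, 0, 0).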
The power of first-order smooth optimization for black-box non-smooth problems
A Gasnikov, A Novitskii, V Novitskii… - arXiv preprint arXiv …, 2022 - arxiv.org
Gradient-free/zeroth-order methods for black-box convex optimization have been
extensively studied in the last decade, with the main focus on oracle call complexity. In this …
A gradient estimator via L1-randomization for online zero-order optimization with two point feedback
This work studies online zero-order optimization of convex and Lipschitz functions. We
present a novel gradient estimator based on two function evaluations and randomization on …
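The estimator can be sketched as follows, assuming directions drawn uniformly from the l1 unit sphere (Dirichlet-distributed magnitudes with random signs) and a sign-vector correction; the paper's exact normalization may differ.

```python
import numpy as np

def sample_l1_sphere(d, rng):
    # Uniform point on the l1 unit sphere: normalized exponential magnitudes
    # (Dirichlet(1,...,1)) combined with independent random signs.
    mags = rng.exponential(size=d)
    mags /= mags.sum()
    return rng.choice([-1.0, 1.0], size=d) * mags

def l1_two_point_grad(f, x, h, rng):
    # Two-point gradient estimator with l1 randomization: two function
    # evaluations along +/- h*z, rescaled by the dimension and sign(z).
    z = sample_l1_sphere(x.size, rng)
    return x.size * (f(x + h * z) - f(x - h * z)) / (2 * h) * np.sign(z)
```

On a quadratic such as f(x) = ||x||^2, the estimator is unbiased for the true gradient, so averaging many draws recovers 2x.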
Randomized gradient-free methods in convex optimization
Consider a convex optimization problem min_{x ∈ Q ⊆ R^d} f(x) (1) with convex feasible set Q
and convex objective f possessing the zeroth-order (gradient/derivative-free) oracle [83]. The …
Online nonconvex optimization with limited instantaneous oracle feedback
We investigate online nonconvex optimization from a local regret minimization perspective.
Previous studies along this line implicitly required access to sufficient gradient oracles at …
Improve single-point zeroth-order optimization using high-pass and low-pass filters
Single-point zeroth-order optimization (SZO) is useful in solving online black-box
optimization and control problems in time-varying environments, as it queries the function …
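A minimal sketch of the idea, assuming the classic single-point spherical estimator and a simple exponential-moving-average low-pass filter on the noisy estimates; the paper's actual filter designs (high-pass and low-pass) are more sophisticated than this stand-in.

```python
import numpy as np

def single_point_grad(f, x, delta, rng):
    # Classic one-point estimator: only one query per step, but high variance
    # because the baseline value f(x) is never subtracted out.
    u = rng.standard_normal(x.size)
    u /= np.linalg.norm(u)
    return x.size * f(x + delta * u) / delta * u

def szo_filtered(f, x0, steps=5000, lr=1e-3, delta=0.3, beta=0.9, seed=0):
    # Low-pass filtering: an exponential moving average g_bar of the raw
    # one-point estimates damps their variance before the descent step.
    rng = np.random.default_rng(seed)
    x, g_bar = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        g_bar = beta * g_bar + (1 - beta) * single_point_grad(f, x, delta, rng)
        x = x - lr * g_bar
    return x
```

The averaging trades a little tracking lag for a large variance reduction, which is what makes single-point feedback workable in practice.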
Gradient-free methods with inexact oracle for convex-concave stochastic saddle-point problem
In the paper, we generalize the approach of Gasnikov et al. (2017), which makes it possible to solve
(stochastic) convex optimization problems with an inexact gradient-free oracle, to the convex …
Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs
Distributed optimization has a rich history and has demonstrated its effectiveness in many
machine learning applications. In this paper we study a subclass of distributed …
A new one-point residual-feedback oracle for black-box learning and control
Zeroth-order optimization (ZO) algorithms have been recently used to solve black-box or
simulation-based learning and control problems, where the gradient of the objective function …
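The residual-feedback idea can be sketched as follows, assuming a uniform-sphere perturbation: the previous perturbed function value serves as the baseline, so each iteration needs only one new query. This is a hedged sketch of the general scheme, with illustrative names and parameters.

```python
import numpy as np

def zo_residual_feedback(f, x0, steps=8000, lr=1e-3, delta=0.1, seed=0):
    # One function query per iteration: the residual y_t - y_{t-1} between
    # consecutive perturbed evaluations replaces the second query that a
    # two-point estimator would need.
    rng = np.random.default_rng(seed)
    d = x0.size
    x = x0.copy()
    y_prev = f(x)  # warm-start baseline (one extra initial query)
    for _ in range(steps):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)          # uniform direction on the sphere
        y = f(x + delta * u)
        g = d / delta * (y - y_prev) * u
        y_prev = y                      # reuse this value at the next step
        x = x - lr * g
    return x
```

Because the fresh direction u is independent of the stored baseline, the baseline term vanishes in expectation, and the estimator remains unbiased for the smoothed gradient while halving the query cost.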