A zeroth-order block coordinate descent algorithm for huge-scale black-box optimization
We consider the zeroth-order optimization problem in the huge-scale setting, where the
dimension of the problem is so large that performing even basic vector operations on the …
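As a minimal illustration of the general idea named in this entry (a sketch only, not the paper's algorithm), a zeroth-order coordinate step estimates one partial derivative per iteration from two function evaluations, so no full-dimensional vector of gradients is ever formed:

```python
import numpy as np

def zo_coordinate_descent(f, x0, step=0.1, h=1e-6, iters=200, seed=0):
    """Sketch of zeroth-order coordinate descent: each iteration picks one
    coordinate, estimates its partial derivative by a central finite
    difference, and updates that coordinate only."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    d = x.size
    for _ in range(iters):
        i = rng.integers(d)                 # random coordinate index
        e = np.zeros(d)
        e[i] = 1.0
        # Central finite-difference estimate of the i-th partial derivative.
        g_i = (f(x + h * e) - f(x - h * e)) / (2 * h)
        x[i] -= step * g_i
    return x
```

For example, on f(x) = ‖x‖², each selected coordinate contracts toward zero, so the iterate converges to the minimizer.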
Zeroth-order optimization meets human feedback: Provable learning via ranking oracles
In this study, we delve into an emerging optimization challenge involving a black-box
objective function that can only be gauged via a ranking oracle, a situation frequently …
Prompt-tuning decision transformer with preference ranking
Prompt-tuning has emerged as a promising method for adapting pre-trained models to
downstream tasks or aligning with human preferences. Prompt learning is widely used in …
A Hamilton–Jacobi-based proximal operator
First-order optimization algorithms are widely used today. Two standard building blocks in
these algorithms are proximal operators (proximals) and gradients. Although gradients can …
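For a concrete instance of the proximal building block this entry refers to (a standard textbook example, not the paper's Hamilton–Jacobi-based construction), the proximal operator of the ℓ1 norm has a closed form, soft-thresholding:

```python
import numpy as np

def prox_l1(v, lam):
    """Proximal operator of lam * ||x||_1:
        prox(v) = argmin_x  lam * ||x||_1 + 0.5 * ||x - v||^2,
    whose closed-form solution is sign(v) * max(|v| - lam, 0)
    (soft-thresholding, applied elementwise)."""
    v = np.asarray(v, dtype=float)
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
```

Entries with magnitude below the threshold lam are set exactly to zero, which is why this operator appears throughout sparse first-order methods.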
Zeroth-order regularized optimization (zoro): Approximately sparse gradients and adaptive sampling
We consider the problem of minimizing a high-dimensional objective function, which may
include a regularization term, using only (possibly noisy) evaluations of the function. Such …
Stochastic zeroth-order gradient and Hessian estimators: variance reduction and refined bias bounds
We study stochastic zeroth-order gradient and Hessian estimators for real-valued functions
in ℝⁿ. We show that, by taking finite differences along random orthogonal directions, the …
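The estimator family this entry describes can be sketched as follows (an illustrative version, not the paper's exact construction): draw a set of random orthonormal directions, take a central finite difference along each, and rescale so the estimator is unbiased in expectation:

```python
import numpy as np

def zo_gradient(f, x, num_dirs, h=1e-6, seed=0):
    """Finite-difference gradient estimator along random orthogonal directions.
    The columns of Q (from a QR factorization of a Gaussian matrix) are
    orthonormal; since E[Q Q^T] = (num_dirs / d) * I, scaling by d / num_dirs
    makes the estimator unbiased for smooth f as h -> 0."""
    x = np.asarray(x, dtype=float)
    d = x.size
    rng = np.random.default_rng(seed)
    # Orthonormalize a random Gaussian matrix: columns are the directions.
    Q, _ = np.linalg.qr(rng.standard_normal((d, num_dirs)))
    g = np.zeros(d)
    for k in range(num_dirs):
        u = Q[:, k]
        # Central difference along direction u approximates the
        # directional derivative <grad f(x), u>.
        g += (f(x + h * u) - f(x - h * u)) / (2 * h) * u
    return g * (d / num_dirs)
```

When num_dirs = d the directions form a full orthonormal basis and the estimate recovers the gradient exactly for quadratics (up to floating-point error); with num_dirs < d each call is cheaper but noisier.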
Global solutions to nonconvex problems by evolution of Hamilton-Jacobi PDEs
Computing tasks may often be posed as optimization problems. The objective functions for
real-world scenarios are often nonconvex and/or nondifferentiable. State-of-the-art methods …
Curvature-aware derivative-free optimization
The paper discusses derivative-free optimization (DFO), which involves minimizing a
function without access to gradients or directional derivatives, only function evaluations …
Stochastic zeroth order descent with structured directions
We introduce and analyze Structured Stochastic Zeroth order Descent (S-SZD), a finite
difference approach that approximates a stochastic gradient on a set of ℓ ≤ d orthogonal …
Sequential stochastic blackbox optimization with zeroth-order gradient estimators
This work considers stochastic optimization problems in which the objective function values
can only be computed by a blackbox corrupted by some random noise following an …