Derivative-free optimization methods
In many optimization problems arising from scientific, engineering and artificial intelligence
applications, objective and constraint functions are available only as the output of a black …
[BOOK][B] Introduction: tools and challenges in derivative-free and blackbox optimization
In this introductory chapter, we present a high-level description of optimization, blackbox
optimization, and derivative-free optimization. We introduce some basic optimization …
Stochastic first- and zeroth-order methods for nonconvex stochastic programming
In this paper, we introduce a new stochastic approximation type algorithm, namely, the
randomized stochastic gradient (RSG) method, for solving an important class of nonlinear …
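The entry above names the randomized stochastic gradient (RSG) method. A minimal sketch of the core idea, as I understand it, is to run ordinary stochastic gradient steps and output an iterate chosen uniformly at random rather than the last one, which is what makes expected stationarity guarantees possible for nonconvex objectives. The objective, noise model, step size, and iteration count below are illustrative assumptions, not details taken from the paper.

```python
import math
import random

def rsg(grad_oracle, x0, step=0.05, iters=200, seed=0):
    """Sketch of a randomized stochastic gradient (RSG) scheme:
    run plain stochastic gradient steps, then return an iterate
    chosen uniformly at random instead of the final one."""
    rng = random.Random(seed)
    x = x0
    iterates = [x]
    for _ in range(iters):
        g = grad_oracle(x, rng)      # noisy gradient estimate
        x = x - step * g             # stochastic gradient step
        iterates.append(x)
    return rng.choice(iterates)      # uniformly random output iterate

# Illustrative nonconvex objective f(x) = x^2 + 0.5*sin(3x),
# with Gaussian noise added to its gradient (my own toy choice).
def noisy_grad(x, rng):
    return 2.0 * x + 1.5 * math.cos(3.0 * x) + rng.gauss(0.0, 0.1)

x_out = rsg(noisy_grad, x0=2.0)
```

Returning a random iterate rather than the last one looks wasteful, but it is the device that turns an average-case bound over the trajectory into a bound on the output point.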
Smoothing methods for nonsmooth, nonconvex minimization
X Chen - Mathematical programming, 2012 - Springer
We consider a class of smoothing methods for minimization problems where the feasible set
is convex but the objective function is not convex, not differentiable and perhaps not even …
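As a toy illustration of the smoothing-method template discussed above (replace the nonsmooth objective by a smooth approximation and drive the smoothing parameter to zero), one can smooth |x| as sqrt(x^2 + mu^2). The particular smoothing function, shrink factor, and step sizes here are my own illustrative choices, not Chen's.

```python
import math

def smoothed_abs(x, mu):
    """Smooth approximation of |x|: sqrt(x^2 + mu^2).
    Differentiable for mu > 0, and within mu of |x| everywhere."""
    return math.sqrt(x * x + mu * mu)

def smoothing_descent(x0, mu0=1.0, shrink=0.5, inner_steps=50, step=0.1):
    """Minimize |x| by gradient descent on smoothed_abs while
    shrinking mu -- a sketch of the smoothing-method template."""
    x, mu = x0, mu0
    for _ in range(8):                           # outer loop: mu -> 0
        for _ in range(inner_steps):             # inner loop: smooth descent
            g = x / math.sqrt(x * x + mu * mu)   # d/dx smoothed_abs(x, mu)
            x -= step * g
        mu *= shrink
    return x

x_min = smoothing_descent(3.0)
```

The point of the template is that each inner problem is smooth, so standard gradient methods apply, while the outer loop recovers the original nonsmooth problem in the limit.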
Derivative-free optimization of noisy functions via quasi-Newton methods
This paper presents a finite-difference quasi-Newton method for the minimization of noisy
functions. The method takes advantage of the scalability and power of BFGS updating, and …
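A key ingredient in finite-difference methods for noisy functions is choosing the difference interval large enough that the noise does not swamp the estimate. The sketch below shows only that ingredient, a forward-difference gradient with h scaled to the noise level; the h-selection rule is a standard heuristic of roughly this form, not the paper's specific procedure, and the noisy quadratic test function is my own.

```python
import math
import random

rng = random.Random(1)
NOISE = 1e-6   # assumed standard deviation of the function noise

def f_noisy(x):
    # noisy quadratic: f(x) = x0^2 + 2*x1^2 + noise
    return x[0] ** 2 + 2.0 * x[1] ** 2 + rng.gauss(0.0, NOISE)

def fd_gradient(f, x, noise_level, curvature=1.0):
    """Forward-difference gradient with interval h ~ sqrt(noise/curvature),
    roughly balancing truncation error O(h) against noise error O(noise/h).
    Too small an h amplifies noise; too large an h biases the estimate."""
    h = 2.0 * math.sqrt(noise_level / curvature)
    fx = f(x)
    g = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        g.append((f(xp) - fx) / h)
    return g

g = fd_gradient(f_noisy, [1.0, 1.0], NOISE)   # true gradient is (2, 4)
```

Such estimates can then drive a quasi-Newton (e.g. BFGS) update in place of exact gradients, which is the combination the paper's title describes.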
Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization
We propose a first order interior point algorithm for a class of non-Lipschitz and nonconvex
minimization problems with box constraints, which arise from applications in variable …
A Riemannian Smoothing Steepest Descent Method for Non-Lipschitz Optimization on Embedded Submanifolds of
In this paper, we study the generalized subdifferentials and the Riemannian gradient
subconsistency that are the basis for non-Lipschitz optimization on embedded submanifolds …
Optimality conditions and a smoothing trust region Newton method for non-Lipschitz optimization
Regularized minimization problems with nonconvex, nonsmooth, perhaps non-Lipschitz
penalty functions have attracted considerable attention in recent years, owing to their wide …
Worst case complexity of direct search
LN Vicente - EURO Journal on Computational Optimization, 2013 - Springer
In this paper, we prove that the broad class of direct-search methods of directional type
based on imposing sufficient decrease to accept new iterates shares the worst case …
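The "sufficient decrease" condition mentioned above means a trial point is accepted only if it improves the objective by at least a forcing function of the step size, typically on the order of alpha^2. A minimal coordinate-direction sketch of such a directional direct-search method follows; the poll set, forcing constant, and shrink factor are illustrative choices consistent with the standard setup, not Vicente's exact algorithm.

```python
def direct_search(f, x0, alpha0=1.0, c=1e-4, iters=100):
    """Directional direct search with sufficient decrease:
    poll the coordinate directions +/- e_i and accept a trial point
    only if f drops by at least c * alpha^2; if every poll fails,
    halve the step size alpha."""
    x = list(x0)
    alpha = alpha0
    for _ in range(iters):
        fx = f(x)
        improved = False
        for i in range(len(x)):
            for s in (1.0, -1.0):
                trial = list(x)
                trial[i] += s * alpha
                if f(trial) < fx - c * alpha * alpha:   # sufficient decrease
                    x, improved = trial, True
                    break
            if improved:
                break
        if not improved:
            alpha *= 0.5            # unsuccessful poll: shrink the step
    return x

x_star = direct_search(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2,
                       [0.0, 0.0])
```

Requiring a decrease proportional to alpha^2, rather than any decrease at all, is exactly what enables the worst-case complexity counting that the paper analyzes.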
Trust-region methods without using derivatives: worst case complexity and the nonsmooth case
Trust-region methods are a broad class of methods for continuous optimization that found
application in a variety of problems and contexts. In particular, they have been studied and …