[BOOK] Variational analysis and applications
BS Mordukhovich - 2018 - Springer
Springer Monographs in Mathematics. Boris S. Mordukhovich, Variational Analysis and Applications. Editors-in-Chief …
Error bounds, quadratic growth, and linear convergence of proximal methods
D Drusvyatskiy, AS Lewis - Mathematics of Operations …, 2018 - pubsonline.informs.org
The proximal gradient algorithm for minimizing the sum of a smooth and nonsmooth convex
function often converges linearly even without strong convexity. One common reason is that …
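As a toy illustration of the algorithm this entry analyzes, here is a minimal sketch of the proximal gradient method applied to a lasso-type problem (smooth least-squares term plus a nonsmooth l1 term). The problem data, step size, and iteration count below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by forward-backward steps:
    # a gradient step on the smooth part, then the prox of the l1 part.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)              # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

With `A = I` the method reduces to one exact soft-thresholding step, which makes the fixed point easy to check by hand.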
Fast convergence to non-isolated minima: four equivalent conditions for functions
Optimization algorithms can see their local convergence rates deteriorate when the Hessian
at the optimum is singular. These singularities are inescapable when the optima are non …
RSG: Beating subgradient method without smoothness and strong convexity
In this paper, we study the efficiency of a Restarted SubGradient (RSG) method that
periodically restarts the standard subgradient method (SG). We show that, when applied to a …
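The restart scheme described in this abstract can be sketched as follows: run fixed-step subgradient iterations for a stage, restart from the stage average with a smaller step. The stage length, the step-halving schedule, and the 1-d test function are illustrative assumptions, not the paper's exact parameter choices.

```python
import numpy as np

def subgradient_stage(x0, subgrad, step, T):
    # Run T fixed-step subgradient iterations; return the averaged iterate.
    x, avg = x0.copy(), np.zeros_like(x0)
    for _ in range(T):
        x = x - step * subgrad(x)
        avg += x
    return avg / T

def rsg(x0, subgrad, step0, T, stages=8):
    # Restarted subgradient: each stage restarts SG from the previous
    # stage's averaged iterate and halves the step size.
    x, step = x0, step0
    for _ in range(stages):
        x = subgradient_stage(x, subgrad, step, T)
        step /= 2.0
    return x
```

On the nonsmooth function f(x) = |x| (subgradient `np.sign`), the restarts drive the iterate toward the minimizer even though plain SG with a fixed step would only oscillate around it.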
A unified analysis of descent sequences in weakly convex optimization, including convergence rates for bundle methods
We present a framework for analyzing convergence and local rates of convergence of a
class of descent algorithms, assuming the objective function is weakly convex. The …
A globally convergent proximal Newton-type method in nonsmooth convex optimization
The paper proposes and justifies a new algorithm of the proximal Newton type to solve a
broad class of nonsmooth composite convex optimization problems without strong convexity …
The proximal point method revisited
D Drusvyatskiy - arXiv preprint arXiv:1712.06038, 2017 - arxiv.org
In this short survey, I revisit the role of the proximal point method in large scale optimization. I
focus on three recent examples: a proximally guided subgradient method for weakly convex …
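As a toy illustration of the proximal point iteration this survey revisits, consider minimizing the quadratic f(x) = a*x**2/2, for which each proximal subproblem has a closed-form solution; the values of `a` and `lam` below are arbitrary choices for the example.

```python
def proximal_point(z0, lam, a=2.0, iters=50):
    # Proximal point step: z_{k+1} = argmin_x { 0.5*a*x**2 + (x - z_k)**2 / (2*lam) }.
    # For this quadratic the minimizer is z_k / (1 + lam*a), so each step
    # contracts toward the minimizer x* = 0 by a fixed factor.
    z = z0
    for _ in range(iters):
        z = z / (1.0 + lam * a)
    return z
```

The fixed contraction factor 1/(1 + lam*a) is the linear convergence that makes the proximal point method a useful conceptual template for the guided methods the survey discusses.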
Globally convergent coderivative-based generalized Newton methods in nonsmooth optimization
This paper proposes and justifies two globally convergent Newton-type methods to solve
unconstrained and constrained problems of nonsmooth optimization by using tools of …
Coderivative-based semi-Newton method in nonsmooth difference programming
FJ Aragón-Artacho, BS Mordukhovich… - Mathematical …, 2024 - Springer
This paper addresses the study of a new class of nonsmooth optimization problems, where
the objective is represented as a difference of two generally nonconvex functions. We …
Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria
We consider optimization algorithms that successively minimize simple Taylor-like models of
the objective function. Methods of Gauss–Newton type for minimizing the composition of a …