A survey of recent advances in optimization methods for wireless communications
Mathematical optimization is now widely regarded as an indispensable modeling and
solution tool for the design of wireless communications systems. While optimization has …
Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition
In 1963, Polyak proposed a simple condition that is sufficient to show a global linear
convergence rate for gradient descent. This condition is a special case of the Łojasiewicz …
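For concreteness, the condition referenced above is the Polyak-Łojasiewicz (PL) inequality. In standard notation (f an L-smooth function with minimum value f^*, and \mu > 0 the PL constant), it reads

\[
  \tfrac{1}{2}\,\|\nabla f(x)\|^{2} \;\ge\; \mu\,\bigl(f(x) - f^{*}\bigr) \quad \text{for all } x,
\]

under which gradient descent with step size 1/L satisfies f(x_k) - f^* \le (1 - \mu/L)^k (f(x_0) - f^*), a global linear rate that does not require convexity.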
Acceleration methods
This monograph covers some recent advances in a range of acceleration techniques
frequently used in convex optimization. We first use quadratic optimization problems to …
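As a reference point for the techniques covered, one standard accelerated scheme for an L-smooth convex f is Nesterov's accelerated gradient method (the momentum weights \beta_k below are generic placeholders, not the monograph's notation):

\[
  y_{k} = x_{k} + \beta_{k}\,(x_{k} - x_{k-1}), \qquad
  x_{k+1} = y_{k} - \tfrac{1}{L}\,\nabla f(y_{k}),
\]

which, with a suitable choice of \beta_k, improves the O(1/k) objective-gap rate of plain gradient descent to O(1/k^2).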
Error bounds, quadratic growth, and linear convergence of proximal methods
D. Drusvyatskiy and A. S. Lewis, Mathematics of Operations …, 2018
The proximal gradient algorithm for minimizing the sum of a smooth and nonsmooth convex
function often converges linearly even without strong convexity. One common reason is that …
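In generic notation (f smooth, g closed convex, F = f + g with minimum value F^*, \alpha a step size, and X^* the solution set), the method and the regularity condition at issue are

\[
  x_{k+1} = \operatorname{prox}_{\alpha g}\!\bigl(x_{k} - \alpha \nabla f(x_{k})\bigr),
  \qquad
  F(x) - F^{*} \;\ge\; \tfrac{\mu}{2}\,\operatorname{dist}(x, X^{*})^{2},
\]

the latter being the quadratic growth condition that, together with equivalent error bounds, substitutes for strong convexity in linear-rate analyses.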
Linear convergence of first order methods for non-strongly convex optimization
The standard assumption for proving linear convergence of first order methods for smooth
convex optimization is the strong convexity of the objective function, an assumption which …
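The assumption being relaxed is strong convexity, i.e., the requirement that for some \mu > 0

\[
  f(y) \;\ge\; f(x) + \langle \nabla f(x),\, y - x\rangle + \tfrac{\mu}{2}\,\|y - x\|^{2}
  \quad \text{for all } x, y,
\]

which fails for many structured problems (least squares with a rank-deficient design matrix, for instance) on which first-order methods can nonetheless converge linearly.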
Calculus of the exponent of Kurdyka–Łojasiewicz inequality and its applications to linear convergence of first-order methods
In this paper, we study the Kurdyka–Łojasiewicz (KL) exponent, an important quantity for
analyzing the convergence rate of first-order methods. Specifically, we develop various …
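As a reminder of the quantity in question, in standard notation (\partial f the limiting subdifferential, \bar{x} a stationary point, and c, \epsilon > 0), f satisfies the KL inequality at \bar{x} with exponent \alpha \in [0, 1) if

\[
  \operatorname{dist}\bigl(0, \partial f(x)\bigr) \;\ge\; c\,\bigl(f(x) - f(\bar{x})\bigr)^{\alpha}
\]

for all x near \bar{x} with f(\bar{x}) < f(x) < f(\bar{x}) + \epsilon; the exponent \alpha = 1/2 is the case that typically yields linear convergence of first-order methods.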
A highly efficient semismooth Newton augmented Lagrangian method for solving Lasso problems
We develop a fast and robust algorithm for solving large-scale convex composite
optimization models with an emphasis on the \ell_1-regularized least squares regression …
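The model class emphasized is the Lasso problem; in standard notation (A the design matrix, b the observations, \lambda > 0 the regularization parameter),

\[
  \min_{x}\; \tfrac{1}{2}\,\|Ax - b\|_{2}^{2} + \lambda\,\|x\|_{1},
\]

to which the paper applies an augmented Lagrangian method whose inner subproblems are solved by a semismooth Newton method, rather than a purely first-order scheme.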
Linear convergence of proximal gradient algorithm with extrapolation for a class of nonconvex nonsmooth minimization problems
In this paper, we study the proximal gradient algorithm with extrapolation for minimizing the
sum of a Lipschitz differentiable function and a proper closed convex function. Under the …
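In generic notation (f Lipschitz differentiable, g proper closed convex, \alpha a step size, and \beta_k the extrapolation weights), the iteration studied has the form

\[
  y_{k} = x_{k} + \beta_{k}\,(x_{k} - x_{k-1}), \qquad
  x_{k+1} = \operatorname{prox}_{\alpha g}\!\bigl(y_{k} - \alpha \nabla f(y_{k})\bigr),
\]

which includes FISTA-style momentum as a special case while allowing f to be nonconvex.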
RSG: Beating subgradient method without smoothness and strong convexity
In this paper, we study the efficiency of a Restarted SubGradient (RSG) method that
periodically restarts the standard subgradient method (SG). We show that, when applied to a …
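A minimal sketch of the restarting idea, with illustrative parameters (the stage length t, step sizes \eta_s, and the halving schedule are placeholders, not necessarily the paper's exact choices): within stage s, run the subgradient method for t iterations, then restart from the averaged iterate with a smaller step size,

\[
  x^{(s)}_{k+1} = x^{(s)}_{k} - \eta_{s}\, g^{(s)}_{k}, \quad g^{(s)}_{k} \in \partial f\bigl(x^{(s)}_{k}\bigr), \quad k = 0, \dots, t-1;
  \qquad
  x^{(s+1)}_{0} = \tfrac{1}{t} \sum_{k=1}^{t} x^{(s)}_{k}, \qquad \eta_{s+1} = \eta_{s}/2 .
\]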
A unified analysis of descent sequences in weakly convex optimization, including convergence rates for bundle methods
We present a framework for analyzing convergence and local rates of convergence of a
class of descent algorithms, assuming the objective function is weakly convex. The …
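The standing assumption is weak convexity: f is \rho-weakly convex if x \mapsto f(x) + \tfrac{\rho}{2}\|x\|^{2} is convex, or equivalently

\[
  f(y) \;\ge\; f(x) + \langle v,\, y - x\rangle - \tfrac{\rho}{2}\,\|y - x\|^{2}
  \quad \text{for all } x, y \text{ and all } v \in \partial f(x),
\]

a class broad enough to contain smooth functions with Lipschitz gradient as well as many nonsmooth composite losses.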