Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition
In 1963, Polyak proposed a simple condition that is sufficient to show a global linear
convergence rate for gradient descent. This condition is a special case of the Łojasiewicz …
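For context, the Polyak-Łojasiewicz (PL) condition referenced in this entry is usually stated as a lower bound on the gradient norm in terms of the suboptimality gap; a standard form (notation assumed, not taken from the snippet) is
$$ \tfrac{1}{2}\|\nabla f(x)\|^2 \;\ge\; \mu\,\big(f(x) - f^*\big) \qquad \text{for all } x, $$
where $f^*$ is the optimal value and $\mu > 0$ is the PL constant. Under this condition, gradient descent with step size $1/L$ on an $L$-smooth function satisfies $f(x^k) - f^* \le (1 - \mu/L)^k\,(f(x^0) - f^*)$, i.e. a global linear rate.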
On the linear convergence of the alternating direction method of multipliers
We analyze the convergence rate of the alternating direction method of multipliers (ADMM)
for minimizing the sum of two or more nonsmooth convex separable functions subject to …
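As a reminder of the scheme being analyzed, the classical two-block ADMM for $\min_{x,z}\, f(x) + g(z)$ subject to $Ax + Bz = c$ (a generic formulation, not the paper's specific multi-block setup) iterates
$$ x^{k+1} = \arg\min_x L_\rho(x, z^k, y^k), \quad z^{k+1} = \arg\min_z L_\rho(x^{k+1}, z, y^k), \quad y^{k+1} = y^k + \rho\,(A x^{k+1} + B z^{k+1} - c), $$
where $L_\rho(x,z,y) = f(x) + g(z) + y^\top(Ax + Bz - c) + \tfrac{\rho}{2}\|Ax + Bz - c\|^2$ is the augmented Lagrangian with penalty parameter $\rho > 0$.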
Successive convex approximation: Analysis and applications
M Razaviyayn - 2014 - search.proquest.com
The block coordinate descent (BCD) method is widely used for minimizing a continuous
function f of several block variables. At each iteration of this method, a single block of …
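The BCD update mentioned here can be sketched as follows (cyclic block selection assumed for illustration): with $x = (x_1, \dots, x_n)$ partitioned into blocks, a Gauss-Seidel sweep updates block $i$ while holding the other blocks at their most recent values,
$$ x_i^{k+1} \in \arg\min_{x_i}\; f\big(x_1^{k+1}, \dots, x_{i-1}^{k+1},\, x_i,\, x_{i+1}^{k}, \dots, x_n^{k}\big). $$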
A unified approach to error bounds for structured convex optimization problems
Error bounds, which refer to inequalities that bound the distance of vectors in a test set to a
given set by a residual function, have proven to be extremely useful in analyzing the …
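Concretely, an error bound of the kind described here takes the form (generic notation assumed): for a target set $\mathcal{X}$ such as the solution set, a test set $T$, a residual function $r(\cdot) \ge 0$ that vanishes on $\mathcal{X}$, and a constant $\kappa > 0$,
$$ \operatorname{dist}(x, \mathcal{X}) \;\le\; \kappa\, r(x) \qquad \text{for all } x \in T. $$
For composite problems $\min_x f(x) + g(x)$, a common choice of residual is the proximal gradient residual $r(x) = \|x - \operatorname{prox}_g(x - \nabla f(x))\|$, which underlies the Luo-Tseng error bound cited later in this list.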
Iteration complexity analysis of block coordinate descent methods
In this paper, we provide a unified iteration complexity analysis for a family of general block
coordinate descent methods, covering popular methods such as the block coordinate …
An efficient Hessian based algorithm for solving large-scale sparse group Lasso problems
The sparse group Lasso is a widely used statistical model which encourages sparsity both at the group level and within each group. In this paper, we develop an efficient augmented …
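The sparse group Lasso regularizer referred to in this entry combines an $\ell_1$ penalty with a sum of group-wise $\ell_2$ norms; a common formulation (weights and notation assumed) is
$$ \min_{x}\; \tfrac{1}{2}\|Ax - b\|^2 + \lambda_1 \|x\|_1 + \lambda_2 \sum_{g \in \mathcal{G}} w_g \|x_g\|_2, $$
where $\mathcal{G}$ partitions the coordinates into groups, $x_g$ is the subvector for group $g$, and $w_g > 0$ are group weights. The $\ell_1$ term promotes sparsity within groups while the group norms zero out entire groups.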
Stochastic second-order methods improve best-known sample complexity of SGD for gradient-dominated functions
We study the performance of Stochastic Cubic Regularized Newton (SCRN) on a class of functions satisfying the gradient dominance property with $1\le\alpha\le2$, which holds in a …
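The gradient dominance property of order $\alpha$ mentioned in this abstract is commonly written as (constant name assumed)
$$ f(x) - f^* \;\le\; \tau\, \|\nabla f(x)\|^{\alpha} \qquad \text{for all } x, \quad 1 \le \alpha \le 2, $$
with $\alpha = 2$ recovering the Polyak-Łojasiewicz condition noted earlier in this list.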
A block successive upper-bound minimization method of multipliers for linearly constrained convex optimization
Consider the problem of minimizing the sum of a smooth convex function and a separable
nonsmooth convex function subject to linear coupling constraints. Problems of this form arise …
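The problem class described in this entry can be written as (generic notation assumed)
$$ \min_{x}\; f(x) + \sum_{i=1}^{n} g_i(x_i) \quad \text{subject to} \quad \sum_{i=1}^{n} A_i x_i = b, $$
where $f$ is smooth and convex, each $g_i$ is nonsmooth and convex, and the blocks $x_i$ are coupled only through the linear constraint.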
A family of inexact SQA methods for non-smooth convex minimization with provable convergence guarantees based on the Luo–Tseng error bound property
We propose a new family of inexact sequential quadratic approximation (SQA) methods,
which we call the inexact regularized proximal Newton (IRPN) method, for minimizing the …
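For composite problems $\min_x f(x) + g(x)$ with smooth $f$ and nonsmooth convex $g$, an SQA (proximal Newton-type) method of the kind named here solves, at each iterate $x^k$, a subproblem of roughly the following form (the specific regularization is an assumption for illustration):
$$ d^k \approx \arg\min_{d}\; \nabla f(x^k)^\top d + \tfrac{1}{2} d^\top \big(H_k + \mu_k I\big) d + g(x^k + d), \qquad x^{k+1} = x^k + d^k, $$
where $H_k$ approximates $\nabla^2 f(x^k)$ and the subproblem is solved inexactly to a prescribed accuracy.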
On the linear convergence of the proximal gradient method for trace norm regularization
Motivated by various applications in machine learning, the problem of minimizing a convex
smooth loss function with trace norm regularization has received much attention lately …
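For $\min_X f(X) + \lambda\|X\|_*$ with $L$-smooth $f$, the proximal gradient iteration analyzed in this setting takes the step (step size $1/L$ assumed for concreteness)
$$ X^{k+1} = \operatorname{prox}_{(\lambda/L)\|\cdot\|_*}\!\Big(X^k - \tfrac{1}{L}\nabla f(X^k)\Big), $$
and the proximal operator of the trace norm is singular value soft-thresholding: if $Y = U\operatorname{diag}(\sigma)V^\top$, then $\operatorname{prox}_{\tau\|\cdot\|_*}(Y) = U\operatorname{diag}\big((\sigma - \tau)_+\big)V^\top$.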