A unified analysis of descent sequences in weakly convex optimization, including convergence rates for bundle methods
We present a framework for analyzing convergence and local rates of convergence of a
class of descent algorithms, assuming the objective function is weakly convex. The …
Decentralized inexact proximal gradient method with network-independent stepsizes for convex composite optimization
This paper proposes a novel CTA (Combine-Then-Adapt)-based decentralized algorithm for
solving convex composite optimization problems over undirected and connected networks …
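The decentralized method above builds on proximal gradient steps. As a rough single-machine illustration only, here is the classical proximal gradient (ISTA) iteration for the lasso, not the paper's decentralized algorithm; the matrix `A`, data `b`, and parameter choices are invented for the example:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1: shrinks each entry toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, steps=500):
    # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with fixed stepsize 1/L,
    # where L is a Lipschitz constant of the smooth part's gradient.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)                    # gradient (smooth term)
        x = soft_threshold(x - grad / L, lam / L)   # proximal step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.0])
b = A @ x_true
x_hat = proximal_gradient(A, b, lam=0.1)
```

With a small regularization weight and noiseless data, the recovered `x_hat` lands close to `x_true`.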
Error bounds, PL condition, and quadratic growth for weakly convex functions, and linear convergences of proximal point methods
Many machine learning problems lack strong convexity properties. Fortunately, recent
studies have revealed that first-order algorithms also enjoy linear convergences under …
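The proximal point iteration x_{k+1} = prox_{tf}(x_k) referenced in this entry converges linearly under quadratic growth. A minimal numerical sketch on a hand-picked quadratic f (the matrix Q, stepsize t, and starting point are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Proximal point method on f(x) = 0.5 * x^T Q x, which has quadratic
# growth with modulus mu = lambda_min(Q).  Here prox_{t f}(v) solves
# (I + t Q) x = v, and each step contracts the error toward the
# minimizer 0 by at most 1/(1 + t*mu) -- a linear rate.
Q = np.diag([1.0, 4.0])        # mu = 1
t = 1.0                        # proximal stepsize
x = np.array([3.0, -2.0])
errors = []
for _ in range(10):
    x = np.linalg.solve(np.eye(2) + t * Q, x)   # proximal step
    errors.append(np.linalg.norm(x))
ratios = [errors[i + 1] / errors[i] for i in range(9)]
```

Every observed contraction ratio stays below the predicted bound 1/(1 + t*mu) = 0.5.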
A globally convergent proximal Newton-type method in nonsmooth convex optimization
The paper proposes and justifies a new algorithm of the proximal Newton type to solve a
broad class of nonsmooth composite convex optimization problems without strong convexity …
DISA: A dual inexact splitting algorithm for distributed convex composite optimization
In this article, we propose a novel dual inexact splitting algorithm (DISA) for distributed
convex composite optimization problems, where the local loss function consists of a smooth …
Scaled relative graphs: Nonexpansive operators via 2D Euclidean geometry
Many iterative methods in applied mathematics can be thought of as fixed-point iterations,
and such algorithms are usually analyzed analytically, with inequalities. In this paper, we …
Discerning the linear convergence of ADMM for structured convex optimization through the lens of variational analysis
Despite the rich literature, the linear convergence of alternating direction method of
multipliers (ADMM) has not been fully understood even for the convex case. For example …
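For readers unfamiliar with ADMM, a minimal scaled-form sketch on a lasso instance (a standard textbook formulation; the data and parameters are invented for illustration and do not reproduce the paper's linear-convergence analysis):

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam, rho=1.0, steps=200):
    # Scaled-form ADMM for: minimize 0.5*||Ax - b||^2 + lam*||z||_1
    # subject to x = z.  u is the scaled dual variable.
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(steps):
        x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * (z - u))
        z = soft_threshold(x + u, lam / rho)   # z-update: shrinkage
        u = u + x - z                          # dual update on x = z
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 8))
x_true = np.zeros(8); x_true[[1, 4]] = [2.0, -1.5]
b = A @ x_true
x_hat = admm_lasso(A, b, lam=0.1)
```

On this well-conditioned noiseless instance the iterates settle near the sparse ground truth after a few hundred steps.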
A fast stochastic approximation-based subgradient extragradient algorithm with variance reduction for solving stochastic variational inequality problems
XJ Long, YH He - Journal of Computational and Applied Mathematics, 2023 - Elsevier
In this paper, we propose a fast stochastic approximation-based subgradient extragradient
algorithm with variance reduction for solving the stochastic variational inequality, where the …
Variance-based subgradient extragradient method for stochastic variational inequality problems
In this paper, we propose a variance-based subgradient extragradient algorithm with line
search for stochastic variational inequality problems by aiming at robustness with respect to …
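The extragradient step named in these entries evaluates the operator twice per iteration, at a predicted half-step point. A deterministic toy sketch on the bilinear saddle point min_x max_y xy (stepsize and starting point are illustrative; the papers above treat the stochastic setting, which this sketch omits):

```python
import numpy as np

def F(z):
    # Monotone operator of the saddle problem min_x max_y x*y:
    # F(x, y) = (grad_x, -grad_y) = (y, -x).
    x, y = z
    return np.array([y, -x])

eta = 0.5
z_eg = np.array([1.0, 1.0])   # extragradient iterate
z_fw = np.array([1.0, 1.0])   # plain forward (gradient) iterate, for contrast
for _ in range(100):
    z_half = z_eg - eta * F(z_eg)    # prediction half-step
    z_eg = z_eg - eta * F(z_half)    # correction using the predicted point
    z_fw = z_fw - eta * F(z_fw)      # plain step: spirals outward here
```

The plain forward iteration diverges on this problem, while the extragradient iterate contracts to the saddle point at the origin.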
Exponential convergence of primal–dual dynamics under general conditions and its application to distributed optimization
In this article, we establish the local and global exponential convergence of a primal–dual
dynamics (PDD) for solving equality-constrained optimization problems without strong …