A unified analysis of descent sequences in weakly convex optimization, including convergence rates for bundle methods

F Atenas, C Sagastizábal, PJS Silva, M Solodov - SIAM Journal on …, 2023 - SIAM
We present a framework for analyzing convergence and local rates of convergence of a
class of descent algorithms, assuming the objective function is weakly convex. The …

Decentralized inexact proximal gradient method with network-independent stepsizes for convex composite optimization

L Guo, X Shi, J Cao, Z Wang - IEEE Transactions on Signal …, 2023 - ieeexplore.ieee.org
This paper proposes a novel CTA (Combine-Then-Adapt)-based decentralized algorithm for
solving convex composite optimization problems over undirected and connected networks …
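The entry above concerns a decentralized variant; the centralized template it builds on is the classical proximal gradient iteration for composite problems min f(x) + g(x). A minimal sketch (not the paper's CTA algorithm), using the L1 norm for g so the prox is soft-thresholding:

```python
import numpy as np

def soft_threshold(v, t):
    # prox of t*||.||_1: componentwise shrinkage toward zero
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(grad_f, prox_g, x0, step, iters=500):
    """Generic proximal gradient: x+ = prox_{step*g}(x - step*grad_f(x))."""
    x = x0
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Illustrative instance: min 0.5*||Ax - b||^2 + lam*||x||_1 (lasso)
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
lam = 0.1
grad_f = lambda x: A.T @ (A @ x - b)
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L for L = ||A||^2
x_star = proximal_gradient(grad_f,
                           lambda v, t: soft_threshold(v, lam * t),
                           np.zeros(5), step)
```

The step size 1/L with L the gradient Lipschitz constant is the standard choice; decentralized schemes like the one above replace the single gradient step with local computation plus network mixing.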

Error bounds, PL condition, and quadratic growth for weakly convex functions, and linear convergences of proximal point methods

FY Liao, L Ding, Y Zheng - 6th Annual Learning for Dynamics …, 2024 - proceedings.mlr.press
Many machine learning problems lack strong convexity properties. Fortunately, recent
studies have revealed that first-order algorithms also enjoy linear convergence under …
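For a smooth function f with minimum value f* and solution set X*, the two regularity conditions named in this title are commonly stated as follows (these are the standard smooth-case forms with modulus μ > 0; for weakly convex, possibly nonsmooth f the gradient is typically replaced by the gradient of the Moreau envelope):

```latex
\text{PL:}\quad f(x) - f^\star \;\le\; \frac{1}{2\mu}\,\|\nabla f(x)\|^2,
\qquad
\text{QG:}\quad f(x) - f^\star \;\ge\; \frac{\mu}{2}\,\operatorname{dist}(x, X^\star)^2 .
```

Under the PL inequality, gradient and proximal point methods contract the objective gap by a fixed factor per step, which is the mechanism behind the linear rates such papers establish.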

A globally convergent proximal Newton-type method in nonsmooth convex optimization

BS Mordukhovich, X Yuan, S Zeng, J Zhang - Mathematical Programming, 2023 - Springer
The paper proposes and justifies a new proximal Newton-type algorithm for solving a
broad class of nonsmooth composite convex optimization problems without strong convexity …

DISA: A dual inexact splitting algorithm for distributed convex composite optimization

L Guo, X Shi, S Yang, J Cao - IEEE Transactions on Automatic …, 2023 - ieeexplore.ieee.org
In this article, we propose a novel dual inexact splitting algorithm (DISA) for distributed
convex composite optimization problems, where the local loss function consists of a smooth …

Scaled relative graphs: Nonexpansive operators via 2D Euclidean geometry

EK Ryu, R Hannah, W Yin - Mathematical Programming, 2022 - Springer
Many iterative methods in applied mathematics can be thought of as fixed-point iterations,
and such algorithms are usually analyzed analytically, with inequalities. In this paper, we …

Discerning the linear convergence of ADMM for structured convex optimization through the lens of variational analysis

X Yuan, S Zeng, J Zhang - Journal of Machine Learning Research, 2020 - jmlr.org
Despite the rich literature, the linear convergence of alternating direction method of
multipliers (ADMM) has not been fully understood even for the convex case. For example …

A fast stochastic approximation-based subgradient extragradient algorithm with variance reduction for solving stochastic variational inequality problems

XJ Long, YH He - Journal of Computational and Applied Mathematics, 2023 - Elsevier
In this paper, we propose a fast stochastic approximation-based subgradient extragradient
algorithm with variance reduction for solving the stochastic variational inequality, where the …
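This entry and the next both build on Korpelevich's extragradient template for a variational inequality VI(F, C): an extrapolation step followed by an update, each a projected step. A minimal deterministic sketch with a box constraint standing in for C (the papers add stochastic approximation and variance reduction on top of this template):

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    # Euclidean projection onto a box, standing in for a general closed convex C
    return np.clip(x, lo, hi)

def extragradient(F, x0, step, iters=1000):
    """Korpelevich extragradient: y = P_C(x - a*F(x)); x+ = P_C(x - a*F(y))."""
    x = x0
    for _ in range(iters):
        y = project_box(x - step * F(x))      # extrapolation step
        x = project_box(x - step * F(y))      # update uses the extrapolated point
    return x

# Illustrative instance: F(x) = M x with skew-symmetric M (monotone but not
# strongly monotone); the VI solution on the box is x* = 0.  Plain projected
# gradient cycles on this operator, while extragradient converges.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
x_sol = extragradient(lambda x: M @ x, np.array([0.9, -0.7]), step=0.3)
```

The rotation example is the standard motivation for the two-step scheme: evaluating F at the extrapolated point y damps the rotation that defeats the one-step method.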

Variance-based subgradient extragradient method for stochastic variational inequality problems

ZP Yang, J Zhang, Y Wang, GH Lin - Journal of Scientific Computing, 2021 - Springer
In this paper, we propose a variance-based subgradient extragradient algorithm with line
search for stochastic variational inequality problems, aiming at robustness with respect to …

Exponential convergence of primal–dual dynamics under general conditions and its application to distributed optimization

L Guo, X Shi, J Cao, Z Wang - IEEE Transactions on Neural …, 2022 - ieeexplore.ieee.org
In this article, we establish the local and global exponential convergence of a primal–dual
dynamics (PDD) for solving equality-constrained optimization problems without strong …