Adaptive step size rules for stochastic optimization in large-scale learning

Z Yang, L Ma - Statistics and Computing, 2023 - Springer
The importance of the step size in stochastic optimization has been confirmed both
theoretically and empirically during the past few decades and reconsidered in recent years …

A family of spectral gradient methods for optimization

YH Dai, Y Huang, XW Liu - Computational Optimization and Applications, 2019 - Springer
We propose a family of spectral gradient methods, whose stepsize is determined by a
convex combination of the long Barzilai–Borwein (BB) stepsize and the short BB stepsize …
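
For context, the two classical Barzilai–Borwein stepsizes and a simple blend can be sketched as follows. This is an illustrative reading of "convex combination"; the paper's exact parameterization of the family may differ, and the gamma parameter and function names here are assumptions.

import numpy as np

def bb_stepsizes(x_prev, x_curr, g_prev, g_curr):
    # Classical Barzilai-Borwein stepsizes; assumes curvature s @ y > 0.
    s = x_curr - x_prev   # iterate difference
    y = g_curr - g_prev   # gradient difference
    long_bb = (s @ s) / (s @ y)    # BB1, the "long" stepsize
    short_bb = (s @ y) / (y @ y)   # BB2, the "short" stepsize
    return long_bb, short_bb

def blended_stepsize(x_prev, x_curr, g_prev, g_curr, gamma=0.5):
    # gamma in [0, 1] interpolates between the short (gamma=0) and
    # long (gamma=1) BB stepsizes; gamma = 0.5 is illustrative only.
    bb1, bb2 = bb_stepsizes(x_prev, x_curr, g_prev, g_curr)
    return gamma * bb1 + (1.0 - gamma) * bb2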

Gradient methods exploiting spectral properties

Y Huang, YH Dai, XW Liu, H Zhang - Optimization Methods and …, 2020 - Taylor & Francis
We propose a new stepsize for the gradient method. It is shown that this new stepsize will
converge to the reciprocal of the largest eigenvalue of the Hessian, when Dai–Yang's …
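
A toy check of why the reciprocal of the largest Hessian eigenvalue matters: on a quadratic, gradient-descent errors evolve as e_{k+1} = (I - alpha*A) e_k, so a stepsize of 1/lambda_max annihilates the error component along the leading eigenvector in a single step. The matrix below is an arbitrary example, not from the paper.

import numpy as np

A = np.diag([1.0, 3.0, 10.0])       # toy Hessian with lambda_max = 10
alpha = 1.0 / 10.0                  # reciprocal of the largest eigenvalue
e = np.array([1.0, 1.0, 1.0])       # initial error
e_next = (np.eye(3) - alpha * A) @ e
print(e_next)                       # -> [0.9, 0.7, 0.0]: stiffest component removed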

A convergent iterative support shrinking algorithm for non-Lipschitz multi-phase image labeling model

Y Yang, Y Li, C Wu, Y Duan - Journal of Scientific Computing, 2023 - Springer
The non-Lipschitz piecewise constant Mumford–Shah model has been shown to be effective for
image labeling and segmentation problems, where the non-Lipschitz isotropic ℓ_p (0 < p < 1) …
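
As a rough illustration of the non-Lipschitz regularizer and the support-shrinking idea, one can sketch the ℓ_p quasi-norm penalty and a step that freezes zero components. The tolerance and function names below are illustrative assumptions, not the paper's algorithm.

import numpy as np

def lp_penalty(x, p=0.5):
    # Nonconvex, non-Lipschitz isotropic l_p quasi-norm penalty, 0 < p < 1;
    # its gradient blows up at zero, which is what "non-Lipschitz" refers to.
    return np.sum(np.abs(x) ** p)

def shrink_support(x, tol=1e-10):
    # Illustrative support shrinking: entries at (numerical) zero are fixed
    # at zero, so later iterations only update the surviving support.
    support = np.abs(x) > tol
    return np.where(support, x, 0.0), support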

Stochastic variance reduced gradient methods using a trust-region-like scheme

T Yu, XW Liu, YH Dai, J Sun - Journal of Scientific Computing, 2021 - Springer
Stochastic variance reduced gradient (SVRG) methods are important approaches to
minimize the average of a large number of cost functions frequently arising in machine …
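
The underlying SVRG update is standard: each epoch recenters stochastic gradients at a snapshot's full gradient. A minimal sketch, assuming a grad_i(i, x) interface for component gradients; the trust-region-like stepsize scheme of the paper is not reproduced here.

import numpy as np

def svrg(grad_i, n, x0, eta=0.01, epochs=20, inner=None, rng=None):
    # Minimal SVRG for min_x (1/n) * sum_i f_i(x). grad_i(i, x) returns the
    # gradient of the i-th component at x (an assumed interface).
    rng = np.random.default_rng() if rng is None else rng
    inner = n if inner is None else inner
    x = np.array(x0, dtype=float)
    for _ in range(epochs):
        snapshot = x.copy()
        mu = sum(grad_i(i, snapshot) for i in range(n)) / n  # full-gradient anchor
        for _ in range(inner):
            i = rng.integers(n)
            v = grad_i(i, x) - grad_i(i, snapshot) + mu  # variance-reduced estimate
            x = x - eta * v
    return x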

The Barzilai–Borwein method for distributed optimization over unbalanced directed networks

J Hu, X Chen, L Zheng, L Zhang, H Li - Engineering Applications of …, 2021 - Elsevier
This paper studies optimization problems over multi-agent systems, in which all agents
cooperatively minimize a global objective function expressed as a sum of local cost …

A minibatch proximal stochastic recursive gradient algorithm using a trust-region-like scheme and Barzilai–Borwein stepsizes

T Yu, XW Liu, YH Dai, J Sun - IEEE Transactions on Neural …, 2020 - IEEE
We consider the problem of minimizing the sum of an average of a large number of smooth
convex component functions and a possibly nonsmooth convex function that admits a simple …
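
The composite setting here is min_x (1/n) sum_i f_i(x) + h(x), with h admitting a cheap proximal mapping. A minimal sketch of one proximal gradient step, using the ℓ_1 norm as a stand-in for h; grad_estimate stands in for the method's recursive minibatch estimator.

import numpy as np

def prox_l1(v, t):
    # Proximal mapping of t * ||.||_1 (soft-thresholding): one example of a
    # nonsmooth convex term with a simple closed-form prox.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient_step(x, grad_estimate, eta):
    # Gradient step on the smooth average, then prox on the nonsmooth part.
    return prox_l1(x - eta * grad_estimate, eta)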

On the asymptotic convergence and acceleration of gradient methods

Y Huang, YH Dai, XW Liu, H Zhang - Journal of Scientific Computing, 2022 - Springer
We consider the asymptotic behavior of a family of gradient methods, which include the
steepest descent and minimal gradient methods as special instances. It is proved that each …
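
For quadratics f(x) = 0.5 * x^T A x - b^T x, the two named special instances have closed-form stepsizes: steepest descent uses alpha_k = g^T g / (g^T A g) and minimal gradient uses alpha_k = g^T A g / (g^T A^2 g). A sketch of both; the paper's family parameterization may differ from this two-case illustration.

import numpy as np

def quadratic_stepsize(A, g, method="sd"):
    # Exact stepsizes on f(x) = 0.5 * x^T A x - b^T x, where g is the current
    # gradient. "sd" minimizes f along -g; "mg" minimizes the next gradient norm.
    Ag = A @ g
    if method == "sd":
        return (g @ g) / (g @ Ag)    # steepest descent
    return (g @ Ag) / (Ag @ Ag)      # minimal gradient, since g^T A^2 g = ||Ag||^2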

A note on R-linear convergence of nonmonotone gradient methods

XR Li, YK Huang - Journal of the Operations Research Society of China, 2023 - Springer
Nonmonotone gradient methods generally perform better than their monotone counterparts,
especially on unconstrained quadratic optimization. However, the known convergence rate …

Nonnegative iterative reweighted method for sparse linear complementarity problem

X Hu, Q Zheng, K Zhang - Applied Numerical Mathematics, 2024 - Elsevier
The solution of sparse linear complementarity problems (LCPs) has been widely discussed in
many applications. In this paper, we consider the ℓ_p regularization problem with …
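
For reference, LCP(M, q) asks for x >= 0 such that w = Mx + q >= 0 and x^T w = 0. A small residual checker makes the three conditions concrete; the function name is illustrative, not from the paper.

import numpy as np

def lcp_residuals(M, q, x):
    # LCP(M, q): find x >= 0 with w = M x + q >= 0 and x^T w = 0.
    # Returns the violation of each of the three conditions.
    w = M @ x + q
    return np.minimum(x, 0.0), np.minimum(w, 0.0), x @ w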