A framework of constraint preserving update schemes for optimization on Stiefel manifold

B Jiang, YH Dai - Mathematical Programming, 2015 - Springer
This paper considers optimization problems on the Stiefel manifold X^T X = I_p, where X ∈ R^{n×p} is the variable and I_p is the p-by-p identity matrix. A …
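
As a hedged illustration of the feasibility requirement, the sketch below takes a step in the tangent space at X and maps it back onto the manifold with a QR-based retraction. The retraction choice, stepsize, and helper names are assumptions for illustration only, not the specific update schemes proposed in the paper.

    import numpy as np

    def qr_retraction(X, D):
        # Map X + D back onto {X : X^T X = I_p} via thin QR (one common
        # retraction; assumed here, not necessarily the paper's scheme).
        Q, R = np.linalg.qr(X + D)
        return Q * np.sign(np.diag(R))  # fix column signs for continuity

    n, p = 10, 3
    rng = np.random.default_rng(0)
    X, _ = np.linalg.qr(rng.standard_normal((n, p)))   # feasible start
    G = rng.standard_normal((n, p))                    # Euclidean gradient
    # Project G onto the tangent space {Z : X^T Z + Z^T X = 0} at X.
    D = G - X @ (X.T @ G + G.T @ X) / 2
    X_new = qr_retraction(X, -0.1 * D)
    print(np.linalg.norm(X_new.T @ X_new - np.eye(p)))  # ~ machine epsilon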

A family of spectral gradient methods for optimization

YH Dai, Y Huang, XW Liu - Computational Optimization and Applications, 2019 - Springer
We propose a family of spectral gradient methods, whose stepsize is determined by a
convex combination of the long Barzilai–Borwein (BB) stepsize and the short BB stepsize …
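
A minimal sketch of that combination on a convex quadratic, assuming a fixed weight tau; the paper determines the weight by other means.

    import numpy as np

    A = np.diag(np.linspace(1.0, 100.0, 50))   # SPD Hessian of f(x) = x^T A x / 2
    x = np.random.default_rng(1).standard_normal(50)
    g = A @ x
    alpha, tau = 0.01, 0.5                     # initial step; fixed weight (assumed)
    for k in range(200):
        x_new = x - alpha * g
        g_new = A @ x_new
        s, y = x_new - x, g_new - g
        if s @ y == 0:                         # already converged exactly
            break
        bb_long, bb_short = (s @ s) / (s @ y), (s @ y) / (y @ y)
        alpha = tau * bb_long + (1 - tau) * bb_short   # convex combination
        x, g = x_new, g_new
    print(np.linalg.norm(g))                   # gradient norm driven toward zero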

On the acceleration of the Barzilai–Borwein method

Y Huang, YH Dai, XW Liu, H Zhang - Computational Optimization and …, 2022 - Springer
The Barzilai–Borwein (BB) gradient method is efficient for solving large-scale
unconstrained problems to modest accuracy due to its ingenious stepsize, which generally …
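
For reference, the two classical BB stepsizes that this line of work builds on are, in standard notation with s_{k-1} = x_k - x_{k-1} and y_{k-1} = g_k - g_{k-1}:

    \alpha_k^{BB1} = \frac{s_{k-1}^\top s_{k-1}}{s_{k-1}^\top y_{k-1}},
    \qquad
    \alpha_k^{BB2} = \frac{s_{k-1}^\top y_{k-1}}{y_{k-1}^\top y_{k-1}}.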

Gradient methods exploiting spectral properties

Y Huang, YH Dai, XW Liu, H Zhang - Optimization Methods and …, 2020 - Taylor & Francis
We propose a new stepsize for the gradient method. It is shown that this new stepsize
converges to the reciprocal of the largest eigenvalue of the Hessian when Dai–Yang's …
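
A short sanity check of why a stepsize tending to 1/λ_max is attractive on quadratics: one gradient step with α = 1/λ_1 annihilates the gradient component along the dominant eigenvector, since g_{k+1} = (I - αA) g_k. The diagonal Hessian below is an assumed toy example, not the paper's method.

    import numpy as np

    lam = np.array([1.0, 5.0, 100.0])   # eigenvalues of a diagonal Hessian A
    g = np.array([1.0, 1.0, 1.0])       # current gradient in the eigenbasis
    alpha = 1.0 / lam.max()             # reciprocal of the largest eigenvalue
    g_next = (1.0 - alpha * lam) * g    # g_{k+1} = (I - alpha A) g_k
    print(g_next)                       # entry for lambda = 100 is exactly zero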

Smoothing projected Barzilai–Borwein method for constrained non-Lipschitz optimization

Y Huang, H Liu - Computational Optimization and Applications, 2016 - Springer
We present a smoothing projected Barzilai–Borwein (SPBB) algorithm for solving a class of
minimization problems on a closed convex set, where the objective function is nonsmooth …
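
A minimal sketch of one smoothing-plus-projection iteration of this flavor, assuming a nonnegativity constraint, a Huber-type smoothing of |x|, and a safeguarded long BB stepsize; the paper's algorithm and its non-Lipschitz objective class are more general.

    import numpy as np

    def huber_grad(x, mu):
        # Gradient of a Huber-type smoothing of |x| (assumed smoothing).
        return np.where(np.abs(x) <= mu, x / mu, np.sign(x))

    A = np.diag([1.0, 10.0]); b = np.array([3.0, -2.0])
    proj = lambda v: np.clip(v, 0.0, None)        # projection onto {x >= 0}
    x = np.array([1.0, 1.0]); mu, alpha = 1.0, 0.1
    g = A @ x - b + huber_grad(x, mu)
    for k in range(100):
        x_new = proj(x - alpha * g)               # projected gradient step
        g_new = A @ x_new - b + huber_grad(x_new, mu)
        s, y = x_new - x, g_new - g
        if s @ y > 0:                             # safeguarded long BB stepsize
            alpha = (s @ s) / (s @ y)
        x, g = x_new, g_new
        mu = max(0.9 * mu, 1e-8)                  # drive smoothing parameter down
    print(x)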

On the asymptotic convergence and acceleration of gradient methods

Y Huang, YH Dai, XW Liu, H Zhang - Journal of Scientific Computing, 2022 - Springer
We consider the asymptotic behavior of a family of gradient methods, which include the
steepest descent and minimal gradient methods as special instances. It is proved that each …
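
For a strictly convex quadratic with Hessian A and gradient g_k, the two named special instances take their standard Rayleigh-quotient forms (the family studied in the paper interpolates between stepsizes of this kind):

    \alpha_k^{SD} = \frac{g_k^\top g_k}{g_k^\top A g_k},
    \qquad
    \alpha_k^{MG} = \frac{g_k^\top A g_k}{g_k^\top A^2 g_k}.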

A Note on R-Linear Convergence of Nonmonotone Gradient Methods

XR Li, YK Huang - Journal of the Operations Research Society of China, 2023 - Springer
Nonmonotone gradient methods generally perform better than their monotone counterparts,
especially on unconstrained quadratic optimization. However, the known convergence rate …
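
A quick sketch of the nonmonotonicity in question: the long-BB gradient method on a quadratic makes no attempt to decrease f at every step. The script below only counts how often f rises; it is an assumed illustration, not the analysis of the note.

    import numpy as np

    A = np.diag(np.linspace(1.0, 50.0, 20))
    f = lambda v: 0.5 * v @ A @ v
    x = np.ones(20); g = A @ x; alpha = 1.0 / 50.0
    hist = []
    for k in range(60):
        x_new = x - alpha * g
        g_new = A @ x_new
        s, y = x_new - x, g_new - g
        if s @ y > 0:
            alpha = (s @ s) / (s @ y)   # long BB step; no descent safeguard
        x, g = x_new, g_new
        hist.append(f(x))
    print(sum(b > a for a, b in zip(hist, hist[1:])))  # number of increases in f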