Adaptive step size rules for stochastic optimization in large-scale learning
Z Yang, L Ma - Statistics and Computing, 2023 - Springer
The importance of the step size in stochastic optimization has been confirmed both
theoretically and empirically during the past few decades and reconsidered in recent years …
A family of spectral gradient methods for optimization
We propose a family of spectral gradient methods, whose stepsize is determined by a
convex combination of the long Barzilai–Borwein (BB) stepsize and the short BB stepsize …
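The truncated abstract does not show how the combination is chosen, so the following is only a minimal sketch: it assumes a quadratic test problem and a fixed weight tau (both illustrative choices, not taken from the paper) and runs a gradient method whose stepsize is a convex combination of the long and short Barzilai–Borwein stepsizes.

import numpy as np

def bb_convex_combination_gradient(A, b, x0, tau=0.5, iters=100):
    """Gradient method on f(x) = 0.5 x'Ax - b'x whose stepsize is a convex
    combination of the long (BB1) and short (BB2) stepsizes.
    tau is an illustrative fixed weight; the paper's rule is not shown in the snippet."""
    x = x0.copy()
    g = A @ x - b                       # gradient of the quadratic
    alpha = 1.0 / np.linalg.norm(A, 2)  # safe initial stepsize
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g     # iterate and gradient differences
        if s @ y > 0:                   # curvature condition for BB stepsizes
            bb1 = (s @ s) / (s @ y)     # long BB stepsize
            bb2 = (s @ y) / (y @ y)     # short BB stepsize
            alpha = tau * bb1 + (1 - tau) * bb2
        x, g = x_new, g_new
    return x

Note that when s'y > 0 the Cauchy-Schwarz inequality gives BB2 <= BB1, so any convex combination stays between the short and long stepsizes.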
Gradient methods exploiting spectral properties
We propose a new stepsize for the gradient method. It is shown that this new stepsize will
converge to the reciprocal of the largest eigenvalue of the Hessian, when Dai-Yang's …
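The snippet does not show the stepsize itself, but the standard motivation for targeting 1/λ_max is easy to state (a textbook argument added for context, not a claim about this paper's proof): on a quadratic f(x) = ½ xᵀAx − bᵀx a gradient step gives g_{k+1} = (I − α_k A) g_k, so the component of g_k along the eigenvector of the largest eigenvalue λ_1 is scaled by (1 − α_k λ_1); a stepsize converging to 1/λ_1 therefore asymptotically annihilates that dominant gradient component.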
A convergent iterative support shrinking algorithm for non-Lipschitz multi-phase image labeling model
Y Yang, Y Li, C Wu, Y Duan - Journal of Scientific Computing, 2023 - Springer
The non-Lipschitz piecewise constant Mumford–Shah model has been shown effective for
image labeling and segmentation problems, where the non-Lipschitz isotropic ℓ_p (0 < p < 1) …
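For context on the terminology (a standard fact, not taken from the paper): the scalar penalty t ↦ |t|^p with 0 < p < 1 has derivative p|t|^{p−1}, which is unbounded as t → 0, so the regularizer admits no Lipschitz bound near zero; this is the sense in which the model is non-Lipschitz.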
Stochastic variance reduced gradient methods using a trust-region-like scheme
Stochastic variance reduced gradient (SVRG) methods are important approaches to
minimize the average of a large number of cost functions frequently arising in machine …
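For readers unfamiliar with the acronym, a minimal SVRG sketch for min_w (1/n) Σ_i f_i(w) is given below; the callback grad_i and the parameter defaults are illustrative assumptions, and the trust-region-like modifications studied in the cited paper are not reproduced.

import numpy as np

def svrg(grad_i, n, w0, eta=0.1, epochs=20, inner_steps=None, rng=None):
    """Minimal SVRG sketch for min_w (1/n) * sum_i f_i(w).
    grad_i(w, i) must return the gradient of the i-th component at w."""
    rng = rng or np.random.default_rng(0)
    m = inner_steps or n              # common choice: one pass per epoch
    w = w0.copy()
    for _ in range(epochs):
        w_snap = w.copy()             # snapshot point
        full_grad = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
        for _ in range(m):
            i = rng.integers(n)
            # variance-reduced stochastic gradient
            v = grad_i(w, i) - grad_i(w_snap, i) + full_grad
            w = w - eta * v
    return w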
The Barzilai–Borwein Method for distributed optimization over unbalanced directed networks
This paper studies optimization problems over multi-agent systems, in which all agents
cooperatively minimize a global objective function expressed as a sum of local cost …
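In standard notation (added for context, not quoted from the truncated abstract), this is the consensus problem min_{x ∈ R^d} Σ_{i=1}^n f_i(x), where each local cost f_i is available only to agent i and information is exchanged over a directed, possibly unbalanced, communication graph.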
A minibatch proximal stochastic recursive gradient algorithm using a trust-region-like scheme and Barzilai–Borwein stepsizes
We consider the problem of minimizing the sum of an average of a large number of smooth
convex component functions and a possibly nonsmooth convex function that admits a simple …
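To make the composite setting concrete, the sketch below shows one minibatch proximal (forward-backward) step with an ℓ_1 term as an example of a nonsmooth function whose proximal mapping is simple; the fixed stepsize eta is an illustrative assumption, and the Barzilai-Borwein stepsizes and trust-region-like scheme of the cited algorithm are not reproduced.

import numpy as np

def soft_threshold(z, t):
    """Proximal mapping of t*||.||_1, an example of a 'simple' nonsmooth term."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def minibatch_prox_sgd_step(w, grad_i, batch, eta, lam):
    """One minibatch proximal stochastic gradient step for
    min_w (1/n) sum_i f_i(w) + lam*||w||_1 (illustrative fixed stepsize eta)."""
    g = np.mean([grad_i(w, i) for i in batch], axis=0)  # minibatch gradient
    return soft_threshold(w - eta * g, eta * lam)        # forward-backward step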
On the asymptotic convergence and acceleration of gradient methods
We consider the asymptotic behavior of a family of gradient methods, which include the
steepest descent and minimal gradient methods as special instances. It is proved that each …
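For reference, the two special instances named in the abstract have closed-form stepsizes on a strictly convex quadratic; the sketch below (the quadratic setting and iteration count are illustrative assumptions, not the paper's general family) computes both.

import numpy as np

def gradient_method(A, b, x0, rule="SD", iters=100):
    """Gradient method x_{k+1} = x_k - alpha_k g_k on f(x) = 0.5 x'Ax - b'x.
    rule="SD": steepest-descent (Cauchy) stepsize  g'g / g'Ag
    rule="MG": minimal-gradient stepsize           g'Ag / g'A^2 g"""
    x = x0.copy()
    for _ in range(iters):
        g = A @ x - b
        Ag = A @ g
        if rule == "SD":
            alpha = (g @ g) / (g @ Ag)    # minimizes f along -g
        else:
            alpha = (g @ Ag) / (Ag @ Ag)  # minimizes ||grad f|| along -g
        x = x - alpha * g
    return x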
A Note on R-Linear Convergence of Nonmonotone Gradient Methods
XR Li, YK Huang - Journal of the Operations Research Society of China, 2023 - Springer
Nonmonotone gradient methods generally perform better than their monotone counterparts
especially on unconstrained quadratic optimization. However, the known convergence rate …
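For readability, recall the standard definition referred to in the title: a sequence {x_k} converges R-linearly to x* if ‖x_k − x*‖ ≤ C θ^k for some constants C > 0 and θ ∈ (0, 1).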
Nonnegative iterative reweighted method for sparse linear complementarity problem
X Hu, Q Zheng, K Zhang - Applied Numerical Mathematics, 2024 - Elsevier
The solution of the sparse linear complementarity problem (LCP) has been widely discussed in
many applications. In this paper, we consider the ℓ_p regularization problem with …
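For context, the generic reweighting idea behind such methods (not necessarily the exact scheme of this paper) rests on a majorization of the concave scalar function t ↦ t^p on t ≥ 0: t^p ≤ (t_k)^p + p (t_k)^{p−1}(t − t_k), so Σ_i |x_i|^p can be majorized at the current iterate x^k by the weighted ℓ_1 term Σ_i w_i |x_i| with weights w_i = p(|x_i^k| + ε)^{p−1}, and each iteration solves the resulting easier weighted problem (the small ε > 0 avoids division by zero on the current support).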