Haishan Ye
Verified email at xjtu.edu.cn
Title | Cited by | Year
Milenas: Efficient neural architecture search via mixed-level reformulation
C He, H Ye, L Shen, T Zhang
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020
Cited by 164 · 2020
Stochastic recursive gradient descent ascent for stochastic nonconvex-strongly-concave minimax problems
L Luo, H Ye, Z Huang, T Zhang
Advances in Neural Information Processing Systems 33, 20566-20577, 2020
Cited by 133 · 2020
Multi-consensus decentralized accelerated gradient descent
H Ye, L Luo, Z Zhou, T Zhang
Journal of Machine Learning Research 24 (306), 1-50, 2023
Cited by 66 · 2023
Approximate Newton methods
H Ye, L Luo, Z Zhang
Journal of Machine Learning Research 22 (66), 1-41, 2021
Cited by 48* · 2021
Hessian-aware zeroth-order optimization for black-box adversarial attack
H Ye, Z Huang, C Fang, CJ Li, T Zhang
arXiv preprint arXiv:1812.11377, 2018
Cited by 47 · 2018
Fast Fisher discriminant analysis with randomized algorithms
H Ye, Y Li, C Chen, Z Zhang
Pattern Recognition 72, 82-92, 2017
Cited by 40 · 2017
Decentralized accelerated proximal gradient descent
H Ye, Z Zhou, L Luo, T Zhang
Advances in Neural Information Processing Systems 33, 18308-18317, 2020
Cited by 35 · 2020
DeEPCA: Decentralized exact PCA with linear convergence rate
H Ye, T Zhang
Journal of Machine Learning Research 22 (238), 1-27, 2021
Cited by 30 · 2021
Nesterov's acceleration for approximate Newton
H Ye, L Luo, Z Zhang
Journal of Machine Learning Research 21 (142), 1-37, 2020
Cited by 22* · 2020
Towards explicit superlinear convergence rate for SR1
H Ye, D Lin, X Chang, Z Zhang
Mathematical Programming 199 (1), 1273-1303, 2023
Cited by 21* · 2023
Explicit convergence rates of greedy and random quasi-Newton methods
D Lin, H Ye, Z Zhang
Journal of Machine Learning Research 23 (162), 1-40, 2022
Cited by 20 · 2022
Greedy and random quasi-Newton methods with faster explicit superlinear convergence
D Lin, H Ye, Z Zhang
Advances in Neural Information Processing Systems 34, 6646-6657, 2021
Cited by 19 · 2021
PMGT-VR: A decentralized proximal-gradient algorithmic framework with variance reduction
H Ye, W Xiong, T Zhang
arXiv preprint arXiv:2012.15010, 2020
Cited by 17 · 2020
Explicit superlinear convergence rates of Broyden's methods in nonlinear equations
D Lin, H Ye, Z Zhang
arXiv preprint arXiv:2109.01974, 2021
Cited by 14 · 2021
Stochastic distributed optimization under average second-order similarity: Algorithms and analysis
D Lin, Y Han, H Ye, Z Zhang
Advances in Neural Information Processing Systems 36, 2024
Cited by 13 · 2024
Eigencurve: Optimal learning rate schedule for SGD on quadratic objectives with skewed Hessian spectrums
R Pan, H Ye, T Zhang
arXiv preprint arXiv:2110.14109, 2021
Cited by 12 · 2021
Second-order fine-tuning without pain for LLMs: A Hessian-informed zeroth-order optimizer
Y Zhao, S Dang, H Ye, G Dai, Y Qian, IW Tsang
arXiv preprint arXiv:2402.15173, 2024
Cited by 10 · 2024
Decentralized Riemannian conjugate gradient method on the Stiefel manifold
J Chen, H Ye, M Wang, T Huang, G Dai, IW Tsang, Y Liu
arXiv preprint arXiv:2308.10547, 2023
Cited by 10 · 2023
Greedy and random Broyden's methods with explicit superlinear convergence rates in nonlinear equations
H Ye, D Lin, Z Zhang
arXiv preprint arXiv:2110.08572, 2021
Cited by 10 · 2021
An optimal stochastic algorithm for decentralized nonconvex finite-sum optimization
L Luo, H Ye
arXiv preprint arXiv:2210.13931, 2022
Cited by 9 · 2022
Articles 1–20