Lower bounds and optimal algorithms for personalized federated learning
F Hanzely, S Hanzely, S Horváth, P Richtárik
Advances in Neural Information Processing Systems 33, 2304-2315, 2020
Cited by 202

ZeroSARAH: Efficient nonconvex finite-sum optimization with zero full gradient computation
Z Li, S Hanzely, P Richtárik
arXiv preprint arXiv:2103.01447, 2021
Cited by 31

A Damped Newton Method Achieves Global and Local Quadratic Convergence Rate
S Hanzely, D Kamzolov, D Pasechnyuk, A Gasnikov, P Richtárik, M Takáč
Advances in Neural Information Processing Systems 35, 25320-25334, 2022
Cited by 20

Distributed Newton-type methods with communication compression and Bernoulli aggregation
R Islamov, X Qian, S Hanzely, M Safaryan, P Richtárik
arXiv preprint arXiv:2206.03588, 2022
Cited by 12

Adaptive learning of the optimal mini-batch size of SGD
M Alfarra, S Hanzely, A Albasyoni, B Ghanem, P Richtárik
Workshop on Optimization for Machine Learning, NeurIPS 2020, 2020
Cited by 11

Convergence of First-Order Algorithms for Meta-Learning with Moreau Envelopes
K Mishchenko, S Hanzely, P Richtárik
arXiv preprint arXiv:2301.06806, 2023
Cited by 5

Adaptive Optimization Algorithms for Machine Learning
S Hanzely
arXiv preprint arXiv:2311.10203, 2023
Cited by 3

Sketch-and-Project Meets Newton Method: Global Convergence with Low-Rank Updates
S Hanzely
arXiv preprint arXiv:2305.13082, 2023
Cited by 3

ΨDAG: Projected Stochastic Approximation Iteration for DAG Structure Learning
K Ziu, S Hanzely, L Li, K Zhang, M Takáč, D Kamzolov
arXiv preprint arXiv:2410.23862, 2024
Cited by 1

Newton Method Revisited: Global Convergence Rates up to O(1/k²) for Stepsize Schedules and Linesearch Procedures
S Hanzely, F Abdukhakimov, M Takáč
arXiv preprint arXiv:2405.18926, 2024