Grigory Malinovsky

ProxSkip: Yes! Local gradient steps provably lead to communication acceleration! Finally!
K Mishchenko, G Malinovsky, S Stich, P Richtárik
International Conference on Machine Learning, 15750–15769, 2022
Cited by: 167

From local SGD to local fixed-point methods for federated learning
G Malinovskiy, D Kovalev, E Gasanov, L Condat, P Richtárik
International Conference on Machine Learning, 6692–6701, 2020
Cited by: 140

Variance reduced ProxSkip: Algorithm, theory and application to federated learning
G Malinovsky, K Yi, P Richtárik
Advances in Neural Information Processing Systems 35, 15176–15189, 2022
Cited by: 36

A guide through the zoo of biased SGD
Y Demidovich, G Malinovsky, I Sokolov, P Richtárik
Advances in Neural Information Processing Systems 36, 23158–23171, 2023
Cited by: 30

Server-side stepsizes and sampling without replacement provably help in federated optimization
G Malinovsky, K Mishchenko, P Richtárik
Proceedings of the 4th International Workshop on Distributed Machine …, 2023
Cited by: 27

Federated optimization algorithms with random reshuffling and gradient compression
A Sadiev, G Malinovsky, E Gorbunov, I Sokolov, A Khaled, K Burlachenko, ...
arXiv preprint arXiv:2206.07021, 2022
Cited by: 27

Can 5th generation local training methods support client sampling? Yes!
M Grudzień, G Malinovsky, P Richtárik
International Conference on Artificial Intelligence and Statistics, 1055–1092, 2023
Cited by: 24

Distributed proximal splitting algorithms with rates and acceleration
L Condat, G Malinovsky, P Richtárik
Frontiers in Signal Processing 1, 776825, 2022
Cited by: 23

Random reshuffling with variance reduction: New analysis and better rates
G Malinovsky, A Sailanbayev, P Richtárik
Uncertainty in Artificial Intelligence, 1347–1357, 2023
Cited by: 19

TAMUNA: Accelerated federated learning with local training and partial participation
LP Condat, G Malinovsky, P Richtárik
arXiv, 2023
Cited by: 17*

Federated learning with regularized client participation
G Malinovsky, S Horváth, K Burlachenko, P Richtárik
arXiv preprint arXiv:2302.03662, 2023
Cited by: 16

Improving accelerated federated learning with compression and importance sampling
M Grudzień, G Malinovsky, P Richtárik
arXiv preprint arXiv:2306.03240, 2023
Cited by: 14

Byzantine robustness and partial participation can be achieved simultaneously: Just clip gradient differences
G Malinovsky, E Gorbunov, S Horváth, P Richtárik
Privacy Regulation and Protection in Machine Learning, 2023
Cited by: 8

Federated random reshuffling with compression and variance reduction
G Malinovsky, P Richtárik
arXiv preprint arXiv:2205.03914, 2022
Cited by: 7

MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence
IV Modoranu, M Safaryan, G Malinovsky, E Kurtic, T Robert, P Richtarik, ...
arXiv preprint arXiv:2405.15593, 2024
Cited by: 6*

An optimal algorithm for strongly convex min-min optimization
A Gasnikov, D Kovalev, G Malinovsky
arXiv preprint arXiv:2212.14439, 2022
Cited by: 6

Averaged heavy-ball method (in Russian: Метод тяжелого шарика с усреднением)
MY Danilova, GS Malinovsky
Izhevsk Institute of Computer Science, 2022
Cited by: 5*

Minibatch stochastic three points method for unconstrained smooth minimization
S Boucherouite, G Malinovsky, P Richtárik, EH Bergou
Proceedings of the AAAI Conference on Artificial Intelligence 38 (18), 20344 …, 2024
Cited by: 2

Streamlining in the Riemannian Realm: Efficient Riemannian Optimization with Loopless Variance Reduction
Y Demidovich, G Malinovsky, P Richtárik
arXiv preprint arXiv:2403.06677, 2024
Cited by: 1

Methods with Local Steps and Random Reshuffling for Generally Smooth Non-Convex Federated Optimization
Y Demidovich, P Ostroukhov, G Malinovsky, S Horváth, M Takáč, ...
arXiv preprint arXiv:2412.02781, 2024