Mher Safaryan
Postdoctoral MSCA Fellow, IST Austria
Verified email at ist.ac.at
Title · Cited by · Year
On Biased Compression for Distributed Learning
A Beznosikov, S Horváth, P Richtárik, M Safaryan
Journal of Machine Learning Research (JMLR), 2023
Cited by 195 · 2020
FedNL: Making Newton-type methods applicable to federated learning
M Safaryan, R Islamov, X Qian, P Richtárik
International Conference on Machine Learning (ICML), 2022
Cited by 91 · 2021
Optimal Gradient Compression for Distributed and Federated Learning
A Albasyoni, M Safaryan, L Condat, P Richtárik
arXiv preprint arXiv:2010.03246, 2020
Cited by 64 · 2020
Uncertainty principle for communication compression in distributed and federated learning and the search for an optimal compressor
M Safaryan, E Shulgin, P Richtárik
Information and Inference: A Journal of the IMA, 2021
Cited by 64 · 2020
Stochastic Sign Descent Methods: New Algorithms and Better Theory
M Safaryan, P Richtárik
International Conference on Machine Learning (ICML), 2021
Cited by 55* · 2019
Smoothness matrices beat smoothness constants: Better communication compression techniques for distributed optimization
M Safaryan, F Hanzely, P Richtárik
Advances in Neural Information Processing Systems (NeurIPS) 34, 25688-25702, 2021
Cited by 32 · 2021
Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning
X Qian, R Islamov, M Safaryan, P Richtárik
International Conference on Artificial Intelligence and Statistics (AISTATS …, 2021
Cited by 24 · 2021
Construction of free g-dimonoids
Y Movsisyan, S Davidov, M Safaryan
Algebra and discrete mathematics, 2014
Cited by 18 · 2014
Theoretically Better and Numerically Faster Distributed Optimization with Smoothness-Aware Quantization Techniques
B Wang, M Safaryan, P Richtárik
Advances in Neural Information Processing Systems (NeurIPS) 2022
Cited by 15* · 2021
AsGrad: A Sharp Unified Analysis of Asynchronous-SGD Algorithms
R Islamov, M Safaryan, D Alistarh
International Conference on Artificial Intelligence and Statistics (AISTATS …, 2023
Cited by 11 · 2023
GradSkip: Communication-accelerated local gradient methods with better computational complexity
A Maranjyan, M Safaryan, P Richtárik
arXiv preprint arXiv:2210.16402, 2022
Cited by 11 · 2022
Distributed Newton-type methods with communication compression and Bernoulli aggregation
R Islamov, X Qian, S Hanzely, M Safaryan, P Richtárik
Transactions on Machine Learning Research (TMLR), 2023
Cited by 11 · 2022
On generalizations of Fatou’s theorem for the integrals with general kernels
GA Karagulyan, MH Safaryan
The Journal of Geometric Analysis 25, 1459-1475, 2015
Cited by 10 · 2015
On Generalizations of Fatou’s Theorem in ℝⁿ for Convolution Integrals with General Kernels
MH Safaryan
The Journal of Geometric Analysis 31 (4), 3280-3299, 2021
Cited by 8 · 2021
On a theorem of Littlewood
GA Karagulyan, MH Safaryan
Hokkaido Mathematical Journal 46 (1), 87-106, 2017
Cited by 6 · 2017
On an equivalence for differentiation bases of dyadic rectangles
GA Karagulyan, DA Karagulyan, MH Safaryan
Colloq. Math 146 (2), 295-307, 2017
Cited by 6 · 2017
On an equivalency of rare differentiation bases of rectangles
MH Safaryan
Journal of Contemporary Mathematical Analysis (Armenian Academy of Sciences …, 2018
Cited by 5 · 2018
Knowledge Distillation Performs Partial Variance Reduction
M Safaryan, A Peste, D Alistarh
Advances in Neural Information Processing Systems (NeurIPS) 2023
Cited by 3 · 2023
LDAdam: Adaptive Optimization from Low-dimensional Gradient Statistics
T Robert, M Safaryan, IV Modoranu, D Alistarh
International Conference on Learning Representations (ICLR), 2025
Cited by 2 · 2024
MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence
IV Modoranu, M Safaryan, G Malinovsky, E Kurtic, T Robert, P Richtarik, ...
Advances in Neural Information Processing Systems (NeurIPS) 2024
Cited by 1 · 2024