SGD: General analysis and improved rates RM Gower, N Loizou, X Qian, A Sailanbayev, E Shulgin, P Richtárik International Conference on Machine Learning (ICML 2019), 5200-5209, 2019 | 497 | 2019 |
Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization Z Li, D Kovalev, X Qian, P Richtárik International Conference on Machine Learning (ICML 2020), 2020 | 167 | 2020 |
FedNL: Making Newton-type methods applicable to federated learning M Safaryan, R Islamov, X Qian, P Richtárik International Conference on Machine Learning (ICML 2022), 2021 | 91 | 2021 |
Distributed second order methods with fast rates and compressed communication R Islamov, X Qian, P Richtárik International Conference on Machine Learning (ICML 2021), 4617-4628, 2021 | 61 | 2021 |
Error compensated distributed SGD can be accelerated X Qian, P Richtárik, T Zhang Advances in Neural Information Processing Systems (NeurIPS 2021) 34, 2021 | 53 | 2021 |
L-SVRG and L-Katyusha with arbitrary sampling X Qian, Z Qu, P Richtárik Journal of Machine Learning Research 22, 1-49, 2021 | 37 | 2021 |
A model of distributionally robust two-stage stochastic convex programming with linear recourse B Li, X Qian, J Sun, KL Teo, C Yu Applied Mathematical Modelling 58, 86-97, 2018 | 37 | 2018 |
SAGA with arbitrary sampling X Qian, Z Qu, P Richtárik International Conference on Machine Learning (ICML 2019), 5190-5199, 2019 | 27 | 2019 |
Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning X Qian, R Islamov, M Safaryan, P Richtárik International Conference on Artificial Intelligence and Statistics (AISTATS'22), 2022 | 24 | 2022 |
MISO is making a comeback with better proofs and rates X Qian, A Sailanbayev, K Mishchenko, P Richtárik arXiv preprint arXiv:1906.01474, 2019 | 17 | 2019 |
Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation R Islamov, X Qian, S Hanzely, M Safaryan, P Richtárik arXiv preprint arXiv:2206.03588, 2022 | 11 | 2022 |
Error compensated loopless SVRG, Quartz, and SDCA for distributed optimization X Qian, H Dong, P Richtárik, T Zhang arXiv preprint arXiv:2109.10049, 2021 | 5 | 2021 |
Error compensated loopless SVRG for distributed optimization X Qian, H Dong, P Richtárik, T Zhang OPT2020: 12th Annual Workshop on Optimization for Machine Learning (NeurIPS …, 2020 | 4 | 2020 |
The convergent generalized central paths for linearly constrained convex programming X Qian, LZ Liao, J Sun, H Zhu SIAM Journal on Optimization 28 (2), 1183-1204, 2018 | 4 | 2018 |
A strategy of global convergence for the affine scaling algorithm for convex semidefinite programming X Qian, LZ Liao, J Sun Mathematical Programming 179 (1), 1-19, 2020 | 3 | 2020 |
Analysis of some interior point continuous trajectories for convex programming X Qian, LZ Liao, J Sun Optimization 66 (4), 589-608, 2017 | 3 | 2017 |
Error compensated proximal SGD and RDA X Qian, H Dong, P Richtárik, T Zhang 12th Annual Workshop on Optimization for Machine Learning, 2020 | 2 | 2020 |
Analysis of the primal affine scaling continuous trajectory for convex programming X Qian, LZ Liao Pacific Journal of Optimization 14 (2), 261-272, 2018 | 2 | 2018 |
An Interior Point Parameterized Central Path Following Algorithm for Linearly Constrained Convex Programming L Hou, X Qian, LZ Liao, J Sun Journal of Scientific Computing, 2022 | 1 | 2022 |
Generalized Affine Scaling Trajectory Analysis for Linearly Constrained Convex Programming X Qian, LZ Liao International Symposium on Neural Networks, 139-147, 2018 | 1 | 2018 |