| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Projected reflected gradient methods for monotone variational inequalities | Y Malitsky | SIAM Journal on Optimization 25 (1), 502-520 | 396 | 2015 |
| A forward-backward splitting method for monotone inclusions without cocoercivity | Y Malitsky, MK Tam | SIAM Journal on Optimization 30 (2), 1451-1472 | 281 | 2020 |
| Golden ratio algorithms for variational inequalities | Y Malitsky | Mathematical Programming 184 (1), 383-410 | 216 | 2020 |
| A first-order primal-dual algorithm with linesearch | Y Malitsky, T Pock | SIAM Journal on Optimization 28 (1), 411-432 | 160 | 2018 |
| An extragradient algorithm for monotone variational inequalities | YV Malitsky, VV Semenov | Cybernetics and Systems Analysis 50 (2), 271-277 | 149 | 2014 |
| A hybrid method without extrapolation step for solving variational inequality problems | YV Malitsky, VV Semenov | Journal of Global Optimization 61 (1), 193-202 | 136 | 2015 |
| Adaptive gradient descent without descent | Y Malitsky, K Mishchenko | International Conference on Machine Learning 119, 6702-6712 | 135 | 2020 |
| Revisiting stochastic extragradient | K Mishchenko, D Kovalev, E Shulgin, P Richtárik, Y Malitsky | International Conference on Artificial Intelligence and Statistics, 4573-4582 | 99 | 2020 |
| Stochastic variance reduction for variational inequality methods | A Alacaoglu, Y Malitsky | Conference on Learning Theory, 778-816 | 92 | 2022 |
| Shadow Douglas–Rachford splitting for monotone inclusions | ER Csetnek, Y Malitsky, MK Tam | Applied Mathematics & Optimization 80, 665-678 | 74 | 2019 |
| Proximal extrapolated gradient methods for variational inequalities | Y Malitsky | Optimization Methods and Software 33 (1), 140-164 | 64 | 2018 |
| A new regret analysis for Adam-type algorithms | A Alacaoglu, Y Malitsky, P Mertikopoulos, V Cevher | International Conference on Machine Learning, 202-210 | 55 | 2020 |
| Resolvent splitting for sums of monotone operators with minimal lifting | Y Malitsky, MK Tam | Mathematical Programming 201 (1), 231-262 | 30 | 2023 |
| Forward-reflected-backward method with variance reduction | A Alacaoglu, Y Malitsky, V Cevher | Computational Optimization and Applications 80 (2), 321-346 | 28 | 2021 |
| The primal-dual hybrid gradient method reduces to a primal method for linearly constrained optimization problems | Y Malitsky | arXiv preprint arXiv:1706.02602 | 21* | 2017 |
| Convergence of adaptive algorithms for constrained weakly convex optimization | A Alacaoglu, Y Malitsky, V Cevher | Advances in Neural Information Processing Systems 34, 14214-14225 | 20* | 2021 |
| A first-order primal-dual method with adaptivity to local smoothness | ML Vladarean, Y Malitsky, V Cevher | Advances in Neural Information Processing Systems 34, 6171-6182 | 17 | 2021 |
| Adaptive proximal gradient method for convex optimization | Y Malitsky, K Mishchenko | Advances in Neural Information Processing Systems 37, 100670-100697 | 16 | 2025 |
| Beyond the golden ratio for variational inequality algorithms | A Alacaoglu, A Böhm, Y Malitsky | Journal of Machine Learning Research 24 (172), 1-33 | 16 | 2023 |
| Over-the-air computation for distributed systems: Something old and something new | Z Chen, EG Larsson, C Fischione, M Johansson, Y Malitsky | IEEE Network 37 (5), 240-246 | 14 | 2023 |