A survey of stochastic simulation and optimization methods in signal processing

M Pereyra, P Schniter, E Chouzenoux… - IEEE Journal of …, 2015 - ieeexplore.ieee.org
Modern signal processing (SP) methods rely very heavily on probability and statistics to
solve challenging SP problems. SP methods are now expected to deal with ever more …

A forward-backward splitting method for monotone inclusions without cocoercivity

Y Malitsky, MK Tam - SIAM Journal on Optimization, 2020 - SIAM
In this work, we propose a simple modification of the forward-backward splitting method for
finding a zero in the sum of two monotone operators. Our method converges under the same …
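
To make the flavor of such an update concrete, here is a minimal Python sketch of a forward-reflected-backward style iteration for finding a zero of A + B, where A is monotone and Lipschitz (no cocoercivity used) and B has an easy resolvent. The quadratic-plus-l1 instance, step size, and iteration count are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Forward-reflected-backward style iteration (sketch) for 0 in A(x) + B(x),
# with A = gradient of a strongly convex quadratic and the resolvent of B
# given by soft-thresholding.  Illustrative problem instance and parameters.

rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
Q = M.T @ M / n + 0.1 * np.eye(n)     # positive definite
b = rng.standard_normal(n)

A = lambda x: Q @ x - b               # forward operator (monotone, Lipschitz)
L = np.linalg.norm(Q, 2)              # Lipschitz constant of A
lam = 0.25 / L                        # step size below the 1/(2L) threshold

def resolvent(z, lam, mu=0.1):
    """Resolvent of B = mu * d||.||_1, i.e. soft-thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - lam * mu, 0.0)

x_prev = np.zeros(n)
Ax_prev = A(x_prev)
x = x_prev.copy()
for _ in range(2000):
    Ax = A(x)
    # reflected forward step: reuse A(x_{k-1}) instead of a second fresh A call
    x_next = resolvent(x - lam * (2 * Ax - Ax_prev), lam)
    x_prev, Ax_prev, x = x, Ax, x_next

print("fixed-point residual:", np.linalg.norm(x - resolvent(x - lam * A(x), lam)))
```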

On the convergence of single-call stochastic extra-gradient methods

YG Hsieh, F Iutzeler, J Malick… - Advances in Neural …, 2019 - proceedings.neurips.cc
Variational inequalities have recently attracted considerable interest in machine learning as
a flexible paradigm for models that go beyond ordinary loss function minimization (such as …
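
For intuition about the single-call idea, the following minimal Python sketch runs a past extra-gradient iteration on a regularized bilinear saddle point, where each iteration makes only one fresh operator evaluation and reuses the previous one for the extrapolation step. The instance, step size, and iteration count are illustrative assumptions; the paper's setting is stochastic, with noisy single-call oracles.

```python
import numpy as np

# Single-call ("past") extra-gradient sketch on
# min_x max_y (r/2)||x||^2 + x^T C y - (r/2)||y||^2, whose operator
# V(x, y) = (r x + C y, r y - C^T x) is strongly monotone with its unique
# saddle point at the origin.  Illustrative instance and parameters.

rng = np.random.default_rng(1)
n = 20
C = rng.standard_normal((n, n))
r = 0.5                               # regularization making V strongly monotone

def V(z):
    x, y = z[:n], z[n:]
    return np.concatenate([r * x + C @ y, r * y - C.T @ x])

gamma = 0.2 / (np.linalg.norm(C, 2) + r)
z = rng.standard_normal(2 * n)
V_prev = V(z)                         # stored operator value from the previous iteration
for _ in range(5000):
    z_half = z - gamma * V_prev       # extrapolate with the stored value: no extra call
    V_half = V(z_half)                # the only fresh operator evaluation this iteration
    z = z - gamma * V_half
    V_prev = V_half

print("distance to the saddle point (origin):", np.linalg.norm(z))
```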

Mini-batch semi-stochastic gradient descent in the proximal setting

J Konečný, J Liu, P Richtárik… - IEEE Journal of Selected …, 2015 - ieeexplore.ieee.org
We propose mS2GD: a method incorporating a mini-batching scheme for improving the
theoretical complexity and practical performance of semi-stochastic gradient descent …
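
The following minimal Python sketch shows the main ingredient of methods in this family: an outer loop computes a full gradient at a reference point, and an inner loop combines mini-batch gradients with that reference gradient before a proximal step. The lasso-type test problem, step size, batch size, and fixed inner-loop length are illustrative assumptions; mS2GD itself comes with specific parameter rules (e.g., a randomized inner-loop length).

```python
import numpy as np

# Mini-batch, semi-stochastic (SVRG-style) proximal gradient loop (sketch).
# Illustrative l1-regularized least-squares instance and parameters.

rng = np.random.default_rng(2)
n_samples, dim = 500, 30
A = rng.standard_normal((n_samples, dim))
b = A @ rng.standard_normal(dim) + 0.1 * rng.standard_normal(n_samples)
mu = 0.01                                        # l1 regularization weight

def grad_batch(x, idx):                          # mini-batch gradient of the smooth part
    Ai = A[idx]
    return Ai.T @ (Ai @ x - b[idx]) / len(idx)

def prox_l1(z, t):                               # proximal operator of t * mu * ||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t * mu, 0.0)

step, batch, inner = 1e-3, 10, 200
x_ref = np.zeros(dim)
for epoch in range(30):
    full_grad = A.T @ (A @ x_ref - b) / n_samples      # full gradient at the reference point
    x = x_ref.copy()
    for _ in range(inner):
        idx = rng.choice(n_samples, size=batch, replace=False)
        g = grad_batch(x, idx) - grad_batch(x_ref, idx) + full_grad   # variance-reduced direction
        x = prox_l1(x - step * g, step)
    x_ref = x                                    # simplest update; mS2GD randomizes the inner length

print("objective:", 0.5 * np.mean((A @ x_ref - b) ** 2) + mu * np.sum(np.abs(x_ref)))
```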

Arock: an algorithmic framework for asynchronous parallel coordinate updates

Z Peng, Y Xu, M Yan, W Yin - SIAM Journal on Scientific Computing, 2016 - SIAM
Finding a fixed point to a nonexpansive operator, i.e., x^* = Tx^*, abstracts many problems in
numerical linear algebra, optimization, and other areas of data science. To solve fixed-point …
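
A serial simplification of this kind of scheme is sketched below in Python: pick a random coordinate and relax it along the fixed-point residual x - Tx. The asynchronous, lock-free updates on possibly stale iterates that are the actual point of the ARock framework are omitted, and the quadratic test operator, relaxation parameter, and iteration count are illustrative assumptions.

```python
import numpy as np

# Randomized single-coordinate relaxation of the fixed-point residual x - Tx
# for a nonexpansive operator T (serial sketch; no asynchrony).  Here T is a
# gradient step on a convex quadratic, so its fixed point is the minimizer.

rng = np.random.default_rng(3)
n = 40
M = rng.standard_normal((n, n))
Q = M.T @ M / n + 0.1 * np.eye(n)
b = rng.standard_normal(n)
L = np.linalg.norm(Q, 2)

def T(x):                               # nonexpansive: gradient step with step 1/L
    return x - (Q @ x - b) / L

x = np.zeros(n)
eta = 0.9                               # relaxation parameter
for _ in range(20000):
    i = rng.integers(n)                 # pick one coordinate at random
    x[i] -= eta * (x - T(x))[i]         # relax only that coordinate of the residual

print("fixed-point residual:", np.linalg.norm(x - T(x)))
```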

Stochastic primal-dual hybrid gradient algorithm with arbitrary sampling and imaging applications

A Chambolle, MJ Ehrhardt, P Richtárik… - SIAM Journal on …, 2018 - SIAM
We propose a stochastic extension of the primal-dual hybrid gradient algorithm studied by
Chambolle and Pock in 2011 to solve saddle point problems that are separable in the dual …
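
To give the flavor, here is a minimal Python sketch of a stochastic primal-dual loop that updates one randomly sampled dual block per iteration and maintains an extrapolated running copy of K^T y. The ridge-regularized least-squares instance, uniform sampling, and step sizes are illustrative assumptions; the paper's framework covers arbitrary samplings and gives the precise step-size conditions.

```python
import numpy as np

# Stochastic primal-dual hybrid gradient loop (sketch) for
# min_x (1/2)||Kx - b||^2 + (mu/2)||x||^2 with the data term split into m
# dual blocks.  One dual block is sampled and updated per iteration.

rng = np.random.default_rng(4)
m, rows, dim = 10, 20, 50                 # m dual blocks of `rows` rows each
K = [rng.standard_normal((rows, dim)) for _ in range(m)]
b = [rng.standard_normal(rows) for _ in range(m)]
mu = 0.1                                  # strength of the primal l2 term

Lk = [np.linalg.norm(Ki, 2) for Ki in K]
p = 1.0 / m                               # uniform sampling probability
sigma = [0.9 / Li for Li in Lk]
tau = 0.9 / (m * max(Lk))                 # keeps tau * sigma_i * ||K_i||^2 below p

x = np.zeros(dim)
y = [np.zeros(rows) for _ in range(m)]
z = sum(K[i].T @ y[i] for i in range(m))  # running value of K^T y
z_bar = z.copy()

for _ in range(20000):
    x = (x - tau * z_bar) / (1.0 + tau * mu)                  # prox of (mu/2)||x||^2
    i = rng.integers(m)                                       # sample one dual block
    y_new = (y[i] + sigma[i] * (K[i] @ x - b[i])) / (1.0 + sigma[i])   # prox of the conjugate
    delta = K[i].T @ (y_new - y[i])
    z = z + delta
    z_bar = z + delta / p                                     # extrapolated dual aggregate
    y[i] = y_new

Kmat, bvec = np.vstack(K), np.concatenate(b)
x_star = np.linalg.solve(Kmat.T @ Kmat + mu * np.eye(dim), Kmat.T @ bvec)
print("distance to the closed-form solution:", np.linalg.norm(x - x_star))
```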

Stochastic variance reduction for variational inequality methods

A Alacaoglu, Y Malitsky - Conference on Learning Theory, 2022 - proceedings.mlr.press
We propose stochastic variance reduced algorithms for solving convex-concave saddle
point problems, monotone variational inequalities, and monotone inclusions. Our framework …
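
As one concrete instance of the idea, the sketch below runs a loopless variance-reduced extra-gradient loop on a finite-sum, strongly monotone operator: the leading step reuses the full operator evaluated at a snapshot point, the correcting step uses a single-sample control-variate estimate, and the snapshot is refreshed with small probability. The instance, the averaging weight, the snapshot probability, and the step size are illustrative assumptions; the paper should be consulted for the exact algorithm and its step-size rules.

```python
import numpy as np

# Loopless variance-reduced extra-gradient loop (sketch) for a finite-sum
# operator F = (1/n) sum_i F_i, with F_i(x, y) = (r x + C_i y, r y - C_i^T x),
# so F is strongly monotone and its zero is the origin.  Illustrative instance
# and parameters only.

rng = np.random.default_rng(5)
n_ops, d, r = 10, 15, 0.5
Cs = [rng.standard_normal((d, d)) for _ in range(n_ops)]

def F_i(z, i):
    x, y = z[:d], z[d:]
    return np.concatenate([r * x + Cs[i] @ y, r * y - Cs[i].T @ x])

def F_full(z):
    return sum(F_i(z, i) for i in range(n_ops)) / n_ops

L = max(np.linalg.norm(Ci, 2) for Ci in Cs) + r
p, alpha, tau = 0.2, 0.8, 0.05 / L       # snapshot prob., averaging weight, step

z = rng.standard_normal(2 * d)
w = z.copy()
Fw_full = F_full(w)                      # full operator at the snapshot
for _ in range(40000):
    z_bar = alpha * z + (1 - alpha) * w
    z_half = z_bar - tau * Fw_full       # leading step reuses the snapshot operator
    i = rng.integers(n_ops)
    g = Fw_full + F_i(z_half, i) - F_i(w, i)   # variance-reduced estimate of F(z_half)
    z = z_bar - tau * g
    if rng.random() < p:                 # loopless snapshot refresh
        w = z.copy()
        Fw_full = F_full(w)

print("operator norm at the final iterate:", np.linalg.norm(F_full(z)))
```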

Bandit learning in concave N-person games

M Bravo, D Leslie… - Advances in Neural …, 2018 - proceedings.neurips.cc
This paper examines the long-run behavior of learning with bandit feedback in non-cooperative
concave games. The bandit framework accounts for extremely low-information …
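
The low-information ingredient can be sketched as follows in Python: a one-point (single payoff query) gradient estimate, obtained by perturbing the action along a random unit direction, observing only the realized payoff, and rescaling it, followed by a projected gradient-ascent step. The single-agent concave payoff, the action set, and the step and perturbation schedules are illustrative assumptions; the paper analyzes the N-player game setting.

```python
import numpy as np

# One-point bandit gradient estimate with projected gradient ascent (sketch).
# Only a single payoff value is observed per round; no gradient oracle is used.

rng = np.random.default_rng(6)
d = 5
x_star = np.array([1.0, -2.0, 0.5, 0.0, 1.5])      # unknown payoff maximizer
payoff = lambda x: -np.sum((x - x_star) ** 2)      # concave payoff function
radius = 5.0                                       # compact action set: a Euclidean ball

def project(x):
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

x = np.zeros(d)
for t in range(1, 100001):
    delta = 0.5 / t ** 0.1                         # slowly shrinking query radius
    step = 0.05 / t ** 0.8                         # faster shrinking step size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                         # random direction on the unit sphere
    reward = payoff(x + delta * u)                 # the only feedback: one payoff value
    grad_est = (d / delta) * reward * u            # one-point gradient estimate
    x = project(x + step * grad_est)               # projected ascent step

print("distance to the optimum:", np.linalg.norm(x - x_star))
```

As expected for payoff-only feedback, the estimate is very noisy and convergence is slow; the shrinking perturbation radius trades off bias against variance.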

Bayesian computation: a summary of the current state, and samples backwards and forwards

PJ Green, K Łatuszyński, M Pereyra, CP Robert - Statistics and Computing, 2015 - Springer
Recent decades have seen enormous improvements in computational inference for
statistical models; there have been competitive continual enhancements in a wide range of …

Convergence of sequences: A survey

B Franci, S Grammatico - Annual Reviews in Control, 2022 - Elsevier
Convergent sequences of real numbers play a fundamental role in many different problems
in system theory, e.g., in Lyapunov stability analysis, as well as in optimization theory and …