Online learning: A comprehensive survey

SCH Hoi, D Sahoo, J Lu, P Zhao - Neurocomputing, 2021 - Elsevier
Online learning represents a family of machine learning methods, where a learner attempts
to tackle some predictive (or any type of decision-making) task by learning from a sequence …
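The sequential protocol this survey studies (predict, observe, suffer a loss, update) can be illustrated with online gradient descent on squared loss. This is a generic sketch of the protocol, not code from the survey:

```python
import numpy as np

def online_gradient_descent(data_stream, dim, eta=0.1):
    """Online gradient descent for a stream of (x, y) pairs, squared loss.

    Illustrative sketch of the online learning protocol:
    predict -> observe true label -> suffer loss -> update.
    """
    w = np.zeros(dim)
    losses = []
    for x, y in data_stream:
        pred = w @ x                  # 1. make a prediction
        loss = 0.5 * (pred - y) ** 2  # 2. suffer a loss
        losses.append(loss)
        grad = (pred - y) * x         # 3. gradient of this round's loss
        w -= eta * grad               # 4. update the model
    return w, losses
```

The learner's cumulative loss relative to the best fixed `w` in hindsight is the regret that online learning theory bounds.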

Adaptive gradient-based meta-learning methods

M Khodak, MFF Balcan… - Advances in Neural …, 2019 - proceedings.neurips.cc
We build a theoretical framework for designing and understanding practical meta-learning
methods that integrates sophisticated formalizations of task-similarity with the extensive …

A reduction of imitation learning and structured prediction to no-regret online learning

S Ross, G Gordon, D Bagnell - Proceedings of the fourteenth …, 2011 - proceedings.mlr.press
Sequential prediction problems such as imitation learning, where future observations
depend on previous predictions (actions), violate the common iid assumptions made in …
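The paper's DAgger-style reduction repeatedly rolls out the current policy, queries the expert on the states actually visited, aggregates the labeled data, and retrains. A minimal sketch of that loop, where `rollout`, `expert`, and `train` are hypothetical placeholders rather than the paper's code:

```python
def dagger(rollout, expert, train, n_iters=5):
    """Minimal sketch of the DAgger (dataset aggregation) loop.

    rollout(policy) -> list of states visited when executing `policy`;
    expert(state)   -> the expert's action for that state;
    train(S, A)     -> a policy fit to the aggregated dataset.
    All three are illustrative placeholders.
    """
    states, actions = [], []
    policy = expert                      # iteration 1: imitate the expert
    for _ in range(n_iters):
        visited = rollout(policy)        # execute the current policy
        states += visited
        actions += [expert(s) for s in visited]  # expert labels visited states
        policy = train(states, actions)  # retrain on the aggregate dataset
    return policy
```

Training on states the learner itself visits is what avoids the compounding-error problem of behavioral cloning under the i.i.d. assumption.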

High probability convergence of stochastic gradient methods

Z Liu, TD Nguyen, TH Nguyen… - … on Machine Learning, 2023 - proceedings.mlr.press
In this work, we describe a generic approach to show convergence with high probability for
both stochastic convex and non-convex optimization with sub-Gaussian noise. In previous …

On the convergence of adaptive gradient methods for nonconvex optimization

D Zhou, J Chen, Y Cao, Z Yang, Q Gu - arXiv preprint, 2018 - arxiv.org

Stochastic optimization with heavy-tailed noise via accelerated gradient clipping

E Gorbunov, M Danilova… - Advances in Neural …, 2020 - proceedings.neurips.cc
In this paper, we propose a new accelerated stochastic first-order method called clipped-
SSTM for smooth convex stochastic optimization with heavy-tailed distributed noise in …
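Gradient clipping, the core device behind clipped-SSTM, can be illustrated with plain (non-accelerated) clipped SGD. This is a sketch under that simplification, not the paper's accelerated method:

```python
import numpy as np

def clipped_sgd(grad_fn, x0, eta=0.05, clip=1.0, n_steps=500):
    """SGD with norm clipping: rescale any stochastic gradient whose norm
    exceeds `clip` before taking the step. Bounding the step size this way
    is what tames heavy-tailed gradient noise.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        g = grad_fn(x)
        norm = np.linalg.norm(g)
        if norm > clip:
            g = g * (clip / norm)  # project onto the clipping ball
        x = x - eta * g
    return x
```

With heavy-tailed (e.g. Cauchy) noise, unclipped SGD can take arbitrarily large steps; clipping caps each step at `eta * clip`.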

Composite objective mirror descent

JC Duchi, S Shalev-Shwartz, Y Singer, A Tewari - Colt, 2010 - Citeseer
We present a new method for regularized convex optimization and analyze it under both
online and stochastic optimization settings. In addition to unifying previously known first-order …
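For the Euclidean mirror map with an l1 regularizer, a composite mirror descent step reduces to a gradient step on the smooth part followed by soft-thresholding. A sketch of that special case only, not the general method:

```python
import numpy as np

def comid_l1_step(w, grad, eta, lam):
    """One composite mirror descent step, Euclidean mirror map + l1 penalty.

    Gradient step on the smooth loss, then the proximal operator of
    eta * lam * ||.||_1, i.e. componentwise soft-thresholding.
    """
    z = w - eta * grad  # gradient step on the smooth part
    return np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)  # soft-threshold
```

Handling the regularizer through its proximal operator, rather than subgradients, is what lets the method produce exactly sparse iterates.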