A unified discretization framework for differential equation approach with Lyapunov arguments for convex optimization

K Ushiyama, S Sato, T Matsuo - Advances in Neural …, 2023 - proceedings.neurips.cc
The differential equation (DE) approach for convex optimization, which relates optimization
methods to specific continuous DEs with rate-revealing Lyapunov functionals, has gained …
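As a minimal illustration of the rate-revealing Lyapunov idea (a sketch on f(x) = 0.5*||x||^2, not the paper's unified framework): along gradient flow x' = -grad f(x), the functional E(t) = t*(f(x(t)) - f*) + 0.5*||x(t) - x*||^2 is nonincreasing for convex f, which immediately gives the rate f(x(t)) - f* <= ||x0 - x*||^2 / (2t). A forward-Euler simulation confirms the monotone decrease numerically:

```python
import numpy as np

def grad_f(x):          # f(x) = 0.5 * ||x||^2, minimizer x* = 0, f* = 0
    return x

h, T = 1e-3, 5.0
x = np.array([2.0, -1.0])
x_star = np.zeros(2)
energies = []
t = 0.0
for _ in range(int(T / h)):
    t += h
    x = x - h * grad_f(x)                       # forward-Euler step of x' = -grad f(x)
    f_gap = 0.5 * np.dot(x, x)                  # f(x) - f*
    E = t * f_gap + 0.5 * np.dot(x - x_star, x - x_star)
    energies.append(E)

# E should be (numerically) nonincreasing, certifying the O(1/t) rate
assert all(a >= b - 1e-9 for a, b in zip(energies, energies[1:]))
```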

Continuous vs. discrete optimization of deep neural networks

O Elkabetz, N Cohen - Advances in Neural Information …, 2021 - proceedings.neurips.cc
Existing analyses of optimization in deep learning are either continuous, focusing on
(variants of) gradient flow, or discrete, directly treating (variants of) gradient descent …
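The continuous/discrete gap the paper studies can be seen numerically: gradient descent with step eta tracks gradient flow (here approximated by a much finer Euler integration) up to an O(eta) discretization error. A toy comparison on a convex quadratic, illustrative only:

```python
import numpy as np

def grad_f(x):                      # simple convex quadratic f(x) = 0.5 * x^T A x
    A = np.diag([1.0, 10.0])
    return A @ x

x0 = np.array([1.0, 1.0])
eta, T = 0.01, 1.0

# discrete: gradient descent with step eta
xd = x0.copy()
for _ in range(int(T / eta)):
    xd = xd - eta * grad_f(xd)

# continuous: gradient flow x' = -grad f(x), integrated with a much finer Euler step
h = eta / 100
xc = x0.copy()
for _ in range(int(T / h)):
    xc = xc - h * grad_f(xc)

gap = np.linalg.norm(xd - xc)       # O(eta) discretization gap
```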

Continuous-time analysis of accelerated gradient methods via conservation laws in dilated coordinate systems

JJ Suh, G Roh, EK Ryu - International Conference on …, 2022 - proceedings.mlr.press
We analyze continuous-time models of accelerated gradient methods through deriving
conservation laws in dilated coordinate systems. Namely, instead of analyzing the dynamics …
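For context, the best-known continuous-time model of Nesterov's accelerated gradient is the ODE of Su, Boyd, and Candès, which analyses of this kind start from:

\[
\ddot{X}(t) + \frac{3}{t}\,\dot{X}(t) + \nabla f\big(X(t)\big) = 0, \qquad X(0) = x_0,\ \dot{X}(0) = 0,
\]

with the associated rate \( f(X(t)) - f^\star = O(1/t^2) \). Rather than working with a Lyapunov functional directly, the dilated-coordinate approach rewrites such dynamics in time-rescaled coordinates to expose conserved quantities; the specific change of variables is the paper's contribution and is not reproduced here.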

Infotainment enabled smart cars: A joint communication, caching, and computation approach

SMA Kazmi, TN Dang, I Yaqoob… - IEEE Transactions …, 2019 - ieeexplore.ieee.org
The remarkable prevalence of cloud computing has enabled smart cars to provide infotainment
services. However, retrieving infotainment content from long-distance data centers poses a …

Finite-time convergence in continuous-time optimization

O Romero, M Benosman - International Conference on …, 2020 - proceedings.mlr.press
In this paper, we investigate a Lyapunov-like differential inequality that allows us to establish
finite-time stability of a continuous-time state-space dynamical system represented via a …
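Finite-time (rather than merely asymptotic) convergence can be seen in the simplest normalized dynamics, a sketch and not the paper's system: for f(x) = 0.5*||x||^2, the flow x' = -grad f(x)/||grad f(x)|| shrinks ||x(t)|| at unit speed, so it reaches the minimizer at exactly t = ||x0||:

```python
import numpy as np

x = np.array([3.0, 4.0])                # ||x0|| = 5, so the hitting time should be ~5
h, t = 1e-3, 0.0
while np.linalg.norm(x) > 1e-2:
    g = x                               # gradient of f(x) = 0.5 * ||x||^2
    x = x - h * g / np.linalg.norm(g)   # normalized gradient flow, Euler step
    t += h
```

Unlike plain gradient flow, whose trajectories only approach the minimizer as t grows, the hitting time here is finite and independent of how close the loop's stopping tolerance is to zero.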

Accelerated primal-dual methods for linearly constrained convex optimization problems

H Luo - arXiv preprint arXiv:2109.12604, 2021 - arxiv.org
This work proposes an accelerated primal-dual dynamical system for affine constrained
convex optimization and presents a class of primal-dual methods with nonergodic …
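A common unaccelerated form of such primal-dual dynamics for min f(x) subject to Ax = b (a generic sketch, not the paper's accelerated system) is the saddle flow x' = -(grad f(x) + A^T lam), lam' = Ax - b. Euler-integrating it on a tiny instance recovers the constrained optimum:

```python
import numpy as np

A = np.array([[1.0, 1.0]])          # constraint A x = b
b = np.array([1.0])

def grad_f(x):                      # f(x) = 0.5 * ||x||^2
    return x

x = np.zeros(2)
lam = np.zeros(1)
h = 1e-3
for _ in range(int(30 / h)):        # Euler-integrate the primal-dual saddle flow
    dx = -(grad_f(x) + A.T @ lam)   # descent in the primal variable
    dlam = A @ x - b                # ascent in the dual variable
    x, lam = x + h * dx, lam + h * dlam

# converges to the KKT point x* = (0.5, 0.5), lam* = -0.5
```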

On dissipative symplectic integration with applications to gradient-based optimization

G França, MI Jordan, R Vidal - Journal of Statistical Mechanics …, 2021 - iopscience.iop.org
Recently, continuous-time dynamical systems have proved useful in providing conceptual
and quantitative insights into gradient-based optimization, widely used in modern machine …
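A minimal conformal-symplectic-style scheme for the damped (heavy-ball) ODE x'' + gamma*x' + grad f(x) = 0 splits each step into an exact damping of the momentum followed by a symplectic-Euler kick and drift. This is an illustrative sketch; the paper's integrators and rate analysis are more general:

```python
import numpy as np

def grad_f(x):                      # f(x) = 0.5 * ||x||^2
    return x

gamma, h = 1.0, 0.1
x, p = np.array([2.0, -1.0]), np.zeros(2)
for _ in range(500):
    p = np.exp(-gamma * h) * p      # exact solve of the damping (conformal) part
    p = p - h * grad_f(x)           # symplectic-Euler kick
    x = x + h * p                   # symplectic-Euler drift
```

The appeal of such splittings is that the dissipation is treated exactly, so the discrete iterates inherit the contraction of the continuous flow rather than relying on step-size-dependent damping.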

Finite-sample analysis of nonlinear stochastic approximation with applications in reinforcement learning

Z Chen, S Zhang, TT Doan, JP Clarke, ST Maguluri - Automatica, 2022 - Elsevier
Motivated by applications in reinforcement learning (RL), we study a nonlinear stochastic
approximation (SA) algorithm under Markovian noise, and establish its finite-sample …
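The basic template behind such analyses is the Robbins-Monro iteration x_{k+1} = x_k + alpha_k (h(x_k) + noise) with diminishing steps. A one-line instance, estimating the root of h(x) = mu - x from noisy observations (illustrative i.i.d. noise, not the paper's nonlinear Markovian setting):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 3.0                            # unknown target: root of h(x) = mu - x
x = 0.0
for k in range(20000):
    y = mu + rng.normal()           # noisy observation of the operator value
    x = x + (y - x) / (k + 1)       # Robbins-Monro step, alpha_k = 1/(k+1)
```

With alpha_k = 1/(k+1) this iteration is exactly the running sample mean, converging at the O(1/sqrt(k)) rate that finite-sample analyses make non-asymptotic.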

Learning-accelerated ADMM for distributed DC optimal power flow

D Biagioni, P Graf, X Zhang, AS Zamzam… - IEEE Control …, 2020 - ieeexplore.ieee.org
We propose a novel data-driven method to accelerate the convergence of Alternating
Direction Method of Multipliers (ADMM) for solving distributed DC optimal power flow (DC …
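For reference, one vanilla (non-learning-accelerated) scaled-form ADMM iteration on a two-block problem min f(x) + g(z) subject to x = z alternates an x-minimization, a z-minimization, and a dual update. A scalar instance with f(x) = (x - 3)^2 and g(z) = z^2, illustrative only and unrelated to the power-flow formulation:

```python
rho = 1.0
x = z = u = 0.0
for _ in range(100):
    # x-update: argmin_x (x - 3)^2 + (rho/2) * (x - z + u)^2
    x = (2 * 3.0 + rho * (z - u)) / (2 + rho)
    # z-update: argmin_z z^2 + (rho/2) * (x + u - z)^2
    z = rho * (x + u) / (2 + rho)
    # dual update on the scaled multiplier
    u = u + x - z

# x and z agree at the optimum x* = z* = 1.5 of (x - 3)^2 + x^2
```

Learning-based acceleration of the kind the paper proposes typically warm-starts or predicts these iterates rather than changing the update equations themselves.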

Conformal symplectic and relativistic optimization

G França, J Sulam, D Robinson… - Advances in Neural …, 2020 - proceedings.neurips.cc
Arguably, the two most popular accelerated or momentum-based optimization methods are
Nesterov's accelerated gradient and Polyak's heavy ball, both corresponding to different …
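The two methods differ chiefly in where the gradient is evaluated: heavy ball adds momentum to a gradient step taken at the current iterate, while Nesterov evaluates the gradient at an extrapolated point. A side-by-side sketch on a quadratic, using the standard textbook updates with illustrative step and momentum parameters:

```python
import numpy as np

def grad_f(x):                      # f(x) = 0.5 * ||x||^2
    return x

h, mu = 0.1, 0.9
x_hb = x_prev = np.array([2.0, -1.0])
x_ng = y = np.array([2.0, -1.0])
for k in range(500):
    # Polyak heavy ball: momentum on the previous displacement
    x_hb, x_prev = x_hb - h * grad_f(x_hb) + mu * (x_hb - x_prev), x_hb
    # Nesterov: gradient evaluated at the extrapolated point y
    x_new = y - h * grad_f(y)
    y = x_new + (k / (k + 3)) * (x_new - x_ng)
    x_ng = x_new
```

Both iterations can be read as discretizations of second-order ODEs with friction, which is exactly the correspondence the conformal-symplectic viewpoint makes precise.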