Convex optimization for trajectory generation: A tutorial on generating dynamically feasible trajectories reliably and efficiently

D Malyuta, TP Reynolds, M Szmuk… - IEEE Control …, 2022 - ieeexplore.ieee.org
Reliable and efficient trajectory generation methods are a fundamental need for
autonomous dynamical systems. The goal of this article is to provide a comprehensive …

A survey on some recent developments of alternating direction method of multipliers

DR Han - Journal of the Operations Research Society of China, 2022 - Springer
Recently, the alternating direction method of multipliers (ADMM) has attracted much attention from
various fields, and many variants have been tailored to different models. Moreover, its …
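
For context, a common baseline the survey's variants depart from is the standard two-block ADMM in scaled form (notation here is ours, not necessarily the survey's), for min_{x,z} f(x) + g(z) subject to Ax + Bz = c, with penalty parameter ρ > 0 and scaled dual variable u:

\[
\begin{aligned}
x^{k+1} &= \arg\min_{x}\; f(x) + \tfrac{\rho}{2}\,\|Ax + Bz^{k} - c + u^{k}\|_2^{2},\\
z^{k+1} &= \arg\min_{z}\; g(z) + \tfrac{\rho}{2}\,\|Ax^{k+1} + Bz - c + u^{k}\|_2^{2},\\
u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c.
\end{aligned}
\]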

On the global convergence of gradient descent for over-parameterized models using optimal transport

L Chizat, F Bach - Advances in neural information …, 2018 - proceedings.neurips.cc
Many tasks in machine learning and signal processing can be solved by minimizing a
convex function of a measure. This includes sparse spikes deconvolution or training a …
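
A minimal caricature of the setting (our notation, assuming the objective is smooth enough to differentiate through particle positions): minimize a convex functional F over measures μ on parameter space, approached in the over-parameterized regime via an m-particle discretization optimized by gradient descent on the particles with step size η:

\[
\min_{\mu}\; F(\mu), \qquad
\mu_m = \frac{1}{m}\sum_{i=1}^{m}\delta_{\theta_i}, \qquad
\theta_i^{k+1} = \theta_i^{k} - \eta\,\nabla_{\theta_i} F\!\left(\mu_m^{k}\right).
\]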

Gradient descent maximizes the margin of homogeneous neural networks

K Lyu, J Li - arXiv preprint arXiv:1906.05890, 2019 - arxiv.org
In this paper, we study the implicit regularization of the gradient descent algorithm in
homogeneous neural networks, including fully-connected and convolutional neural …
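
As a reference point (paraphrased, not quoted from the paper): for a network Φ(θ; ·) that is homogeneous of order L in its parameters, i.e. Φ(cθ; x) = c^L Φ(θ; x), and binary labels y_i ∈ {−1, +1}, the normalized margin is typically defined as

\[
\bar{\gamma}(\theta) \;=\; \frac{\min_{i}\, y_i\,\Phi(\theta; x_i)}{\|\theta\|_2^{L}},
\]

and the implicit-regularization result concerns gradient descent driving this quantity toward larger values.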

Gradient descent only converges to minimizers

JD Lee, M Simchowitz, MI Jordan… - Conference on learning …, 2016 - proceedings.mlr.press
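
Roughly stated (our paraphrase): for f with L-Lipschitz gradient and step size α ∈ (0, 1/L), the gradient descent map

\[
g(x) \;=\; x - \alpha\,\nabla f(x)
\]

converges to a strict saddle point only from a set of initializations of Lebesgue measure zero, so under a strict-saddle assumption on f, randomly initialized gradient descent converges only to minimizers almost surely.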

First-order methods almost always avoid strict saddle points

JD Lee, I Panageas, G Piliouras, M Simchowitz… - Mathematical …, 2019 - Springer
We establish that first-order methods avoid strict saddle points for almost all initializations.
Our results apply to a wide variety of first-order methods, including (manifold) gradient …
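
For reference, a critical point x* is a strict saddle when the Hessian has at least one strictly negative eigenvalue,

\[
\nabla f(x^{*}) = 0, \qquad \lambda_{\min}\!\left(\nabla^{2} f(x^{*})\right) < 0,
\]

so there is a direction of strict negative curvature along which first-order dynamics can escape.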

Algorithmic regularization in learning deep homogeneous models: Layers are automatically balanced

SS Du, W Hu, JD Lee - Advances in neural information …, 2018 - proceedings.neurips.cc
We study the implicit regularization imposed by gradient descent for learning multi-layer
homogeneous functions including feed-forward fully connected and convolutional deep …
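
The balancedness phenomenon in the title can be summarized (our paraphrase, under the gradient-flow idealization of gradient descent) as a conservation law between adjacent layers W_h, W_{h+1} of a homogeneous network:

\[
\frac{d}{dt}\left(\|W_{h+1}(t)\|_F^{2} - \|W_{h}(t)\|_F^{2}\right) \;=\; 0,
\]

so differences of squared layer norms are preserved throughout training and the layers remain automatically balanced.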

From error bounds to the complexity of first-order descent methods for convex functions

J Bolte, TP Nguyen, J Peypouquet, BW Suter - Mathematical Programming, 2017 - Springer
This paper shows that error bounds can be used as effective tools for deriving complexity
results for first-order descent methods in convex minimization. In a first stage, this objective …
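
An illustrative Hölderian error bound of the kind connected to complexity estimates here (stated as an assumption for illustration, not the paper's exact hypothesis): for some γ > 0, p ≥ 1, and all x in a sublevel set,

\[
\operatorname{dist}\!\big(x, \operatorname*{argmin} f\big) \;\le\; \gamma\,\big(f(x) - \min f\big)^{1/p}.
\]

Bounds of this form are closely tied to Kurdyka–Łojasiewicz inequalities with power-type desingularizing functions, which is the bridge exploited to derive convergence rates for first-order descent methods.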

Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward–backward splitting, and regularized Gauss–Seidel methods

H Attouch, J Bolte, BF Svaiter - Mathematical Programming, 2013 - Springer
In view of the minimization of a nonsmooth nonconvex function f, we prove an abstract
convergence result for descent methods satisfying a sufficient-decrease assumption, and …
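
The abstract descent framework in this line of work (our paraphrase) asks, for constants a, b > 0, that the iterates satisfy a sufficient-decrease condition and a relative-error condition,

\[
f(x^{k+1}) + a\,\|x^{k+1} - x^{k}\|^{2} \;\le\; f(x^{k}),
\qquad
\exists\, w^{k+1} \in \partial f(x^{k+1}):\ \|w^{k+1}\| \;\le\; b\,\|x^{k+1} - x^{k}\|,
\]

and combines them with the Kurdyka–Łojasiewicz property of f; these conditions are then verified for proximal algorithms, forward–backward splitting, and regularized Gauss–Seidel methods.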

Proximal alternating minimization and projection methods for nonconvex problems: An approach based on the Kurdyka-Łojasiewicz inequality

H Attouch, J Bolte, P Redont… - … of operations research, 2010 - pubsonline.informs.org
We study the convergence properties of an alternating proximal minimization algorithm for
nonconvex structured functions of the type L(x, y) = f(x) + Q(x, y) + g(y), where f and g are …
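
A sketch of the alternating proximal minimization scheme for such structured objectives, with step sizes λ_k, μ_k > 0 (notation ours):

\[
\begin{aligned}
x^{k+1} &\in \arg\min_{x}\; L(x, y^{k}) + \frac{1}{2\lambda_k}\,\|x - x^{k}\|^{2},\\
y^{k+1} &\in \arg\min_{y}\; L(x^{k+1}, y) + \frac{1}{2\mu_k}\,\|y - y^{k}\|^{2},
\end{aligned}
\]

whose convergence to a critical point is established using the Kurdyka–Łojasiewicz inequality.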