Nonconvex optimization meets low-rank matrix factorization: An overview

Y Chi, YM Lu, Y Chen - IEEE Transactions on Signal …, 2019 - ieeexplore.ieee.org
Substantial progress has been made recently on developing provably accurate and efficient
algorithms for low-rank matrix factorization via nonconvex optimization. While conventional …
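To make the setting concrete, here is a minimal sketch of the factored (Burer-Monteiro style) gradient descent studied in this line of work, on the toy objective 0.5*||L R^T - M||_F^2 with a fully observed M. The function name `factored_gd`, the small random initialization, and the step-size heuristic are illustrative assumptions, not the survey's prescriptions.

```python
import numpy as np

def factored_gd(M, r, iters=1000, seed=0):
    """Gradient descent on the factored objective f(L, R) = 0.5*||L R^T - M||_F^2.

    A toy instance of nonconvex low-rank factorization with a fully observed M;
    the survey covers general measurement models, initializations, and rates.
    """
    rng = np.random.default_rng(seed)
    m, n = M.shape
    step = 0.5 / np.linalg.norm(M, 2)              # conservative step-size heuristic
    L = rng.standard_normal((m, r)) / np.sqrt(m)   # small random initialization
    R = rng.standard_normal((n, r)) / np.sqrt(n)
    for _ in range(iters):
        E = L @ R.T - M                            # residual
        L, R = L - step * (E @ R), R - step * (E.T @ L)
    return L, R

# Toy usage: refit a rank-2 matrix from its full observation.
rng = np.random.default_rng(1)
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
L, R = factored_gd(M, r=2)
print(np.linalg.norm(L @ R.T - M) / np.linalg.norm(M))   # relative error, should be near zero
```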

Complete dictionary recovery over the sphere I: Overview and the geometric picture

J Sun, Q Qu, J Wright - IEEE Transactions on Information …, 2016 - ieeexplore.ieee.org
We consider the problem of recovering a complete (i.e., square and invertible) matrix A_0 from Y ∈ R^{n×p} with Y = A_0 X_0, provided X_0 is sufficiently sparse. This recovery problem is …
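Roughly, the formulation seeks sparse vectors in the row space of Y by minimizing a smooth sparsity surrogate over the unit sphere. The sketch below runs plain Riemannian gradient descent on f(q) = (1/p) * sum_k mu*log(cosh(q^T y_k / mu)) for an orthogonal A_0; the paper itself analyzes the landscape and uses a Riemannian trust-region method, so the function name, plain gradient loop, step size, and toy parameters are assumptions for illustration.

```python
import numpy as np

def dict_recovery_sphere(Y, mu=0.1, step=0.5, iters=1000, seed=0):
    """Riemannian gradient descent for f(q) = mean_k mu*log(cosh(q^T y_k / mu))
    on the unit sphere; for sufficiently sparse X0 and orthogonal A0, minimizers
    lie close to signed columns of A0.
    """
    n, p = Y.shape
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    for _ in range(iters):
        z = q @ Y                              # correlations with the samples, shape (p,)
        egrad = Y @ np.tanh(z / mu) / p        # Euclidean gradient of the surrogate
        rgrad = egrad - (q @ egrad) * q        # project onto the tangent space at q
        q = q - step * rgrad
        q /= np.linalg.norm(q)                 # retract back to the sphere
    return q

# Toy usage: orthogonal dictionary, Bernoulli-Gaussian sparse coefficients.
rng = np.random.default_rng(1)
n, p, theta = 20, 2000, 0.1
A0, _ = np.linalg.qr(rng.standard_normal((n, n)))
X0 = rng.standard_normal((n, p)) * (rng.random((n, p)) < theta)
q = dict_recovery_sphere(A0 @ X0)
print(np.max(np.abs(A0.T @ q)))   # should approach 1 as q aligns with a column of A0
```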

[BOOK][B] An introduction to optimization on smooth manifolds

N Boumal - 2023 - books.google.com
Optimization on Riemannian manifolds, the result of smooth geometry and optimization
merging into one elegant modern framework, spans many areas of science and engineering …

A geometric analysis of neural collapse with unconstrained features

Z Zhu, T Ding, J Zhou, X Li, C You… - Advances in Neural …, 2021 - proceedings.neurips.cc
We provide the first global optimization landscape analysis of Neural Collapse, an intriguing
empirical phenomenon that arises in the last-layer classifiers and features of neural …

Optimistic mirror descent in saddle-point problems: Going the extra (gradient) mile

P Mertikopoulos, B Lecouat, H Zenati, CS Foo… - arXiv preprint arXiv …, 2018 - arxiv.org
Owing to their connection with generative adversarial networks (GANs), saddle-point
problems have recently attracted considerable interest in machine learning and beyond. By …
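The "extra (gradient)" step is easiest to see on a toy bilinear saddle point min_x max_y x^T A y, where simultaneous gradient descent-ascent cycles or diverges while the extragradient variant converges. The Euclidean setting, matrix A, and step size below are illustrative assumptions rather than the paper's mirror-descent generality.

```python
import numpy as np

def extragradient(A, step=0.1, iters=2000, seed=0):
    """Extra-gradient iterates for the bilinear saddle point min_x max_y x^T A y.

    Plain simultaneous gradient descent-ascent cycles or diverges on this problem;
    the look-ahead gradient step restores convergence to the solution (0, 0).
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    y = rng.standard_normal(A.shape[1])
    for _ in range(iters):
        # Extrapolation (look-ahead) step from the current point.
        x_half = x - step * (A @ y)
        y_half = y + step * (A.T @ x)
        # Update the current point using the look-ahead gradients.
        x = x - step * (A @ y_half)
        y = y + step * (A.T @ x_half)
    return x, y

x, y = extragradient(np.array([[2.0, 0.0], [0.0, 1.0]]))
print(np.linalg.norm(x), np.linalg.norm(y))   # both norms should shrink toward zero
```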

[PDF][PDF] Gradient descent only converges to minimizers

JD Lee, M Simchowitz, MI Jordan… - Conference on learning …, 2016 - proceedings.mlr.press
JMLR: Workshop and Conference Proceedings, vol. 49, pp. 1–12, 2016.

First-order methods almost always avoid strict saddle points

JD Lee, I Panageas, G Piliouras, M Simchowitz… - Mathematical …, 2019 - Springer
We establish that first-order methods avoid strict saddle points for almost all initializations.
Our results apply to a wide variety of first-order methods, including (manifold) gradient …
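A small numerical illustration of the claim, under assumed toy choices: gradient descent on f(x, y) = x^4/4 - x^2/2 + y^2/2, which has a strict saddle at the origin (Hessian eigenvalues -1 and +1) and minimizers at (±1, 0), run from random initializations.

```python
import numpy as np

def gradient_descent(grad, x0, step=0.05, iters=500):
    """Plain gradient descent with a fixed step size."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# f(x, y) = x**4/4 - x**2/2 + y**2/2 has a strict saddle at (0, 0)
# and global minimizers at (+1, 0) and (-1, 0).
grad = lambda v: np.array([v[0] ** 3 - v[0], v[1]])

rng = np.random.default_rng(0)
finals = [gradient_descent(grad, rng.standard_normal(2)) for _ in range(100)]
print(sorted({round(float(v[0]), 3) for v in finals}))   # [-1.0, 1.0]: the saddle is avoided
```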

Global rates of convergence for nonconvex optimization on manifolds

N Boumal, PA Absil, C Cartis - IMA Journal of Numerical …, 2019 - academic.oup.com
We consider the minimization of a cost function f on a manifold using Riemannian gradient
descent and Riemannian trust regions (RTR). We focus on satisfying necessary optimality …
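A minimal sketch of the Riemannian gradient descent loop analyzed here (tangent-space projection followed by a retraction), instantiated on the Stiefel manifold for a dominant-subspace problem. The QR retraction, step-size heuristic, and toy matrix are assumptions made for illustration; the trust-region variant is not shown.

```python
import numpy as np

def riemannian_gd_stiefel(A, k, iters=500, seed=0):
    """Riemannian gradient descent for f(X) = -0.5*trace(X^T A X) on the Stiefel
    manifold St(n, k) = {X : X^T X = I}; minimizers span the dominant
    k-dimensional eigenspace of the symmetric matrix A.
    """
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    X, _ = np.linalg.qr(rng.standard_normal((n, k)))   # random feasible start
    step = 0.5 / np.linalg.norm(A, 2)                  # conservative step-size heuristic
    for _ in range(iters):
        egrad = -A @ X                                 # Euclidean gradient
        sym = (X.T @ egrad + egrad.T @ X) / 2
        rgrad = egrad - X @ sym                        # project onto the tangent space at X
        X, _ = np.linalg.qr(X - step * rgrad)          # QR retraction back to the manifold
    return X

# Toy usage: Ritz values from the recovered subspace vs. the top eigenvalues.
rng = np.random.default_rng(1)
B = rng.standard_normal((50, 50))
A = B + B.T
X = riemannian_gd_stiefel(A, k=3)
print(np.sort(np.linalg.eigvalsh(X.T @ A @ X))[::-1])
print(np.sort(np.linalg.eigvalsh(A))[::-1][:3])        # should approximately match
```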

First-order methods for geodesically convex optimization

H Zhang, S Sra - Conference on learning theory, 2016 - proceedings.mlr.press
Geodesic convexity generalizes the notion of (vector space) convexity to nonlinear metric
spaces. But unlike convex optimization, geodesically convex (g-convex) optimization is …
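A standard example of a problem that is geodesically convex yet nonconvex in the Euclidean sense is the Karcher (Fréchet) mean of positive definite matrices under the affine-invariant metric. The sketch below runs Riemannian gradient descent using eigendecomposition-based matrix functions; the helper names, step size, and iteration count are assumptions, not taken from the paper.

```python
import numpy as np

def sym_fun(M, fun):
    """Apply a scalar function to a symmetric matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * fun(w)) @ V.T

def karcher_mean(mats, step=0.25, iters=100):
    """Riemannian gradient descent for the Karcher (Frechet) mean of SPD matrices
    under the affine-invariant metric: minimize the average squared geodesic
    distance to the inputs, a geodesically convex problem.
    """
    X = sum(mats) / len(mats)                          # arithmetic mean as the starting point
    for _ in range(iters):
        Xs = sym_fun(X, np.sqrt)                       # X^{1/2}
        Xsi = sym_fun(X, lambda w: 1.0 / np.sqrt(w))   # X^{-1/2}
        # Mean of the logarithm maps Log_X(A_i) in whitened coordinates
        # (proportional to the negative Riemannian gradient).
        G = sum(sym_fun(Xsi @ A @ Xsi, np.log) for A in mats) / len(mats)
        X = Xs @ sym_fun(step * G, np.exp) @ Xs        # exponential-map update
    return X

# Toy usage: the Karcher mean of {A, A^{-1}} is the identity matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T + np.eye(4)
print(np.linalg.norm(karcher_mean([A, np.linalg.inv(A)]) - np.eye(4)))   # should be near zero
```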

Riemannian SVRG: Fast stochastic optimization on Riemannian manifolds

H Zhang, SJ Reddi, S Sra - Advances in Neural …, 2016 - proceedings.neurips.cc
We study optimization of finite sums of geodesically smooth functions on
Riemannian manifolds. Although variance reduction techniques for optimizing finite-sums …
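A sketch of the variance-reduced update on the unit sphere, using the finite sum f(x) = -(1/2N) * sum_i (a_i^T x)^2 (leading eigenvector of A^T A / N) as a stand-in problem. The estimator combines the stochastic Riemannian gradient at the current point with a parallel-transported correction from a reference point, as in the paper; the specific problem, step size, epoch count, and retraction are illustrative assumptions.

```python
import numpy as np

def proj(x, v):
    """Project v onto the tangent space of the unit sphere at x."""
    return v - (x @ v) * x

def transport(p, q, v):
    """Parallel transport of a tangent vector v from p to q along the geodesic."""
    return v - ((p + q) * (q @ v)) / (1.0 + p @ q)

def rsvrg_sphere(A, step=0.01, epochs=50, seed=0):
    """Riemannian SVRG on the unit sphere for f(x) = -(1/2N) * sum_i (a_i^T x)^2,
    whose minimizers are leading eigenvectors of A^T A / N.
    """
    N, n = A.shape
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(epochs):
        x_ref = x.copy()
        full = proj(x_ref, -(A.T @ (A @ x_ref)) / N)        # full Riemannian gradient at x_ref
        for i in rng.integers(0, N, size=N):
            gi = proj(x, -(A[i] @ x) * A[i])                # stochastic gradient at x
            gi_ref = proj(x_ref, -(A[i] @ x_ref) * A[i])    # same component at x_ref
            v = gi - transport(x_ref, x, gi_ref - full)     # variance-reduced direction
            x = x - step * v
            x /= np.linalg.norm(x)                          # retract back to the sphere
    return x

# Toy usage: compare against the leading right singular vector of A.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 20))
x = rsvrg_sphere(A)
_, _, Vt = np.linalg.svd(A, full_matrices=False)
print(abs(Vt[0] @ x))   # should be close to 1
```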