Nonconvex optimization meets low-rank matrix factorization: An overview

Y Chi, YM Lu, Y Chen - IEEE Transactions on Signal …, 2019 - ieeexplore.ieee.org
Substantial progress has been made recently on developing provably accurate and efficient
algorithms for low-rank matrix factorization via nonconvex optimization. While conventional …

Complete dictionary recovery over the sphere I: Overview and the geometric picture

J Sun, Q Qu, J Wright - IEEE Transactions on Information …, 2016 - ieeexplore.ieee.org
We consider the problem of recovering a complete (i.e., square and invertible) matrix A_0
from Y ∈ R^{n×p} with Y = A_0 X_0, provided X_0 is sufficiently sparse. This recovery problem is …

[BOOK] An introduction to optimization on smooth manifolds

N Boumal - 2023 - books.google.com
Optimization on Riemannian manifolds, the result of smooth geometry and optimization
merging into one elegant modern framework, spans many areas of science and engineering …

Matrix completion has no spurious local minimum

R Ge, JD Lee, T Ma - Advances in neural information …, 2016 - proceedings.neurips.cc
Matrix completion is a basic machine learning problem that has wide applications,
especially in collaborative filtering and recommender systems. Simple non-convex …

A geometric analysis of phase retrieval

J Sun, Q Qu, J Wright - Foundations of Computational Mathematics, 2018 - Springer
Can we recover a complex signal from its Fourier magnitudes? More generally, given a set
of m measurements, y_k = |a_k^* x| for k = 1, …, m, is it possible …

Stochastic model-based minimization of weakly convex functions

D Davis, D Drusvyatskiy - SIAM Journal on Optimization, 2019 - SIAM
We consider a family of algorithms that successively sample and minimize simple stochastic
models of the objective function. We show that under reasonable conditions on …

Global optimality of local search for low rank matrix recovery

S Bhojanapalli, B Neyshabur… - Advances in Neural …, 2016 - proceedings.neurips.cc
We show that there are no spurious local minima in the non-convex factorized
parametrization of low-rank matrix recovery from incoherent linear measurements. With …

Learning one-hidden-layer neural networks with landscape design

R Ge, JD Lee, T Ma - arXiv preprint arXiv:1711.00501, 2017 - arxiv.org
We consider the problem of learning a one-hidden-layer neural network: we assume the
input $x \in \mathbb{R}^d$ is drawn from a Gaussian distribution and the label $y = a^\top \sigma …

Accelerated gradient descent escapes saddle points faster than gradient descent

C Jin, P Netrapalli, MI Jordan - Conference On Learning …, 2018 - proceedings.mlr.press
Nesterov's accelerated gradient descent (AGD), an instance of the general family of
“momentum methods,” provably achieves a faster convergence rate than gradient descent …

Global rates of convergence for nonconvex optimization on manifolds

N Boumal, PA Absil, C Cartis - IMA Journal of Numerical …, 2019 - academic.oup.com
We consider the minimization of a cost function f on a manifold using Riemannian gradient
descent and Riemannian trust regions (RTR). We focus on satisfying necessary optimality …