Nonconvex optimization meets low-rank matrix factorization: An overview
Substantial progress has been made recently on developing provably accurate and efficient
algorithms for low-rank matrix factorization via nonconvex optimization. While conventional …
Complete dictionary recovery over the sphere I: Overview and the geometric picture
We consider the problem of recovering a complete (i.e., square and invertible) matrix $A_0$
from $Y \in \mathbb{R}^{n \times p}$ with $Y = A_0 X_0$, provided $X_0$ is sufficiently sparse. This recovery problem is …
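The generative model in this abstract is easy to state concretely. A minimal NumPy sketch (all dimensions and the sparsity level are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: n dictionary atoms, p samples, `sparsity` nonzeros per column.
n, p, sparsity = 20, 500, 3

# Complete (square, invertible) dictionary A0: here a random orthogonal matrix.
A0, _ = np.linalg.qr(rng.standard_normal((n, n)))

# Sparse coefficient matrix X0: each column has exactly `sparsity` nonzero entries.
X0 = np.zeros((n, p))
for j in range(p):
    support = rng.choice(n, size=sparsity, replace=False)
    X0[support, j] = rng.standard_normal(sparsity)

# Observations: Y = A0 X0. The recovery problem is to find A0 (and X0) from Y alone.
Y = A0 @ X0
```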
An introduction to optimization on smooth manifolds
N. Boumal, 2023
Optimization on Riemannian manifolds, the result of smooth geometry and optimization
merging into one elegant modern framework, spans many areas of science and engineering …
Matrix completion has no spurious local minimum
Matrix completion is a basic machine learning problem that has wide applications,
especially in collaborative filtering and recommender systems. Simple non-convex …
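The simple non-convex approach alluded to here is typically gradient descent on a factorized objective over the observed entries. A minimal sketch, with hypothetical sizes, step size, and iteration count:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: rank-2 ground truth, roughly 60% of entries observed.
n, m, r = 30, 30, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))
mask = rng.random((n, m)) < 0.6            # observation pattern Omega

# Gradient descent on the factorized objective
#   f(U, V) = 0.5 * || P_Omega(U V^T - M) ||_F^2
U = 0.1 * rng.standard_normal((n, r))      # small random initialization
V = 0.1 * rng.standard_normal((m, r))
step = 0.01
for _ in range(2000):
    R = mask * (U @ V.T - M)               # residual on observed entries only
    U, V = U - step * (R @ V), V - step * (R.T @ U)

# Relative error on the observed entries after training.
rel_obs = np.linalg.norm(mask * (U @ V.T - M)) / np.linalg.norm(mask * M)
```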
A geometric analysis of phase retrieval
Can we recover a complex signal from its Fourier magnitudes? More generally, given a set
of $m$ measurements $y_k = \left| a_k^* x \right|$ for $k = 1, \ldots, m$, is it possible …
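The measurement model can be written down directly. A short NumPy sketch (dimensions hypothetical) that also illustrates the global phase ambiguity, one source of the problem's nonconvexity:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sizes (not from the abstract): signal dimension n, m measurements.
n, m = 16, 100
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)             # unknown signal
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))   # rows are the a_k

# Phaseless measurements y_k = |a_k^* x|: the phase of a_k^* x is lost.
y = np.abs(A.conj() @ x)

# The measurements are invariant to a global phase e^{i*theta},
# so x is at best recoverable up to that rotation.
y_rotated = np.abs(A.conj() @ (np.exp(1j * 0.7) * x))
```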
Stochastic model-based minimization of weakly convex functions
We consider a family of algorithms that successively sample and minimize simple stochastic
models of the objective function. We show that under reasonable conditions on …
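One member of this algorithm family is the stochastic subgradient method, where the sampled model is the linear approximation of the loss. A sketch on a standard weakly convex example (real-valued robust phase retrieval); the problem, sizes, and step schedule are my own, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical weakly convex problem: f(x) = (1/m) sum_i |(a_i^T x)^2 - b_i|.
n, m = 10, 400
x_star = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = (A @ x_star) ** 2                      # noiseless measurements

loss = lambda z: np.mean(np.abs((A @ z) ** 2 - b))

x = x_star + 0.3 * rng.standard_normal(n)  # warm start near a minimizer
loss0 = loss(x)
for t in range(3000):
    i = rng.integers(m)                    # sample one term of the objective
    r = (A[i] @ x) ** 2 - b[i]
    g = np.sign(r) * 2.0 * (A[i] @ x) * A[i]   # subgradient of |(a_i^T x)^2 - b_i|
    x = x - 0.01 / np.sqrt(t + 1) * g      # decaying step size
loss1 = loss(x)
```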
Global optimality of local search for low rank matrix recovery
We show that there are no spurious local minima in the non-convex factorized
parametrization of low-rank matrix recovery from incoherent linear measurements. With …
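A minimal sketch of the factorized parametrization the abstract refers to: gradient descent over $U$ on $f(U) = \frac{1}{2m}\sum_k (\langle A_k, UU^\top\rangle - y_k)^2$ with random symmetric measurement matrices. All sizes and the step size are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

n, r, m = 12, 2, 200
U_star = rng.standard_normal((n, r))
M_star = U_star @ U_star.T                      # rank-r PSD ground truth

# Symmetric Gaussian measurement operators A_k and measurements y_k = <A_k, M*>.
As = rng.standard_normal((m, n, n))
As = (As + As.transpose(0, 2, 1)) / 2
y = np.einsum('kij,ij->k', As, M_star)

# Gradient descent on the factorized objective.
U = 0.1 * rng.standard_normal((n, r))           # small random initialization
step = 0.02
for _ in range(1500):
    resid = np.einsum('kij,ij->k', As, U @ U.T) - y
    grad = (2.0 / m) * np.einsum('k,kij->ij', resid, As) @ U
    U = U - step * grad

rel_err = np.linalg.norm(U @ U.T - M_star) / np.linalg.norm(M_star)
```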
Learning one-hidden-layer neural networks with landscape design
We consider the problem of learning a one-hidden-layer neural network: we assume the
input $x \in \mathbb{R}^d$ is drawn from a Gaussian distribution and the label $y = a^\top \sigma$ …
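The abstract truncates the model; a common form in this line of work, assumed here, is $y = a^\top \sigma(Wx)$ with an elementwise nonlinearity (ReLU in this sketch) and Gaussian input. All dimensions are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed architecture (the abstract is truncated): y = a^T sigma(W x), ReLU sigma.
d, k = 8, 4                        # input dimension, number of hidden units
W = rng.standard_normal((k, d))    # hidden-layer weights
a = rng.standard_normal(k)         # output weights

def forward(x):
    """One-hidden-layer network: y = a^T ReLU(W x)."""
    return a @ np.maximum(W @ x, 0.0)

x = rng.standard_normal(d)         # Gaussian input, as in the abstract
y = forward(x)
```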
Accelerated gradient descent escapes saddle points faster than gradient descent
Nesterov's accelerated gradient descent (AGD), an instance of the general family of
“momentum methods,” provably achieves a faster convergence rate than gradient descent …
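The rate advantage is easiest to see in the classical setting this result builds on: a strongly convex quadratic, where AGD contracts roughly like $(1 - 1/\sqrt{\kappa})$ per step versus $(1 - 1/\kappa)$ for gradient descent. A sketch with a hypothetical test problem and standard parameter choices:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical test problem: ill-conditioned quadratic f(x) = 0.5 x^T H x,
# minimized at x = 0, with condition number kappa = 100.
n, kappa = 20, 100.0
evals = np.linspace(1.0, kappa, n)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
H = Q @ np.diag(evals) @ Q.T

eta = 1.0 / kappa                                    # step size 1/L
beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)   # Nesterov momentum parameter

x0 = rng.standard_normal(n)
x_gd = x0.copy()
x_agd, x_prev = x0.copy(), x0.copy()
for _ in range(200):
    x_gd = x_gd - eta * (H @ x_gd)                   # plain gradient descent
    z = x_agd + beta * (x_agd - x_prev)              # momentum look-ahead point
    x_agd, x_prev = z - eta * (H @ z), x_agd         # gradient step taken at z
```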
Global rates of convergence for nonconvex optimization on manifolds
We consider the minimization of a cost function f on a manifold using Riemannian gradient
descent and Riemannian trust regions (RTR). We focus on satisfying necessary optimality …
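Riemannian gradient descent itself is compact: project the Euclidean gradient onto the tangent space, step, then retract back to the manifold. A sketch on the unit sphere, minimizing $f(x) = -\tfrac{1}{2}x^\top H x$ (whose minimizer is the leading eigenvector of $H$); the example and step size are my own, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

n = 15
B = rng.standard_normal((n, n))
H = B @ B.T                                  # symmetric PSD cost matrix

x = rng.standard_normal(n)
x /= np.linalg.norm(x)                       # start on the unit sphere
for _ in range(500):
    egrad = -H @ x                           # Euclidean gradient of f
    rgrad = egrad - (x @ egrad) * x          # project onto tangent space at x
    x = x - 0.01 * rgrad                     # Riemannian gradient step...
    x /= np.linalg.norm(x)                   # ...followed by retraction (renormalize)
```

The projection-then-retract structure is exactly what generalizes: swap the sphere's projector and normalization for the tangent projector and retraction of any other manifold.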