Log-concavity and strong log-concavity: a review

A Saumard, JA Wellner - Statistics Surveys, 2014 - ncbi.nlm.nih.gov
We review and formulate results concerning log-concavity and strong log-concavity in both
discrete and continuous settings. We show how preservation of log-concavity and strongly …

Introduction to the non-asymptotic analysis of random matrices

R Vershynin - arXiv preprint arXiv:1011.3027, 2010 - arxiv.org
This is a tutorial on some basic non-asymptotic methods and concepts in random matrix
theory. The reader will learn several tools for the analysis of the extreme singular values of …

Concentration inequalities

S Boucheron, G Lugosi, O Bousquet - Summer School on Machine Learning, 2003 - Springer
Concentration inequalities deal with deviations of functions of independent random
variables from their expectation. In the last decade new tools have been introduced making …

Algorithmic regularization in over-parameterized matrix sensing and neural networks with quadratic activations

Y Li, T Ma, H Zhang - Conference on Learning Theory, 2018 - proceedings.mlr.press
We show that the gradient descent algorithm provides an implicit regularization effect in the
learning of over-parameterized matrix factorization models and one-hidden-layer neural …

Upper and lower bounds for stochastic processes

M Talagrand - 2014 - Springer
This book had a previous edition [132]. The changes between the two editions are not only
cosmetic or pedagogical, and the degree of improvement in the mathematics themselves is …

Statistical, robustness, and computational guarantees for sliced Wasserstein distances

S Nietert, Z Goldfeld, R Sadhu… - Advances in Neural …, 2022 - proceedings.neurips.cc
Sliced Wasserstein distances preserve properties of classic Wasserstein distances while
being more scalable for computation and estimation in high dimensions. The goal of this …

Beyond NTK with vanilla gradient descent: A mean-field analysis of neural networks with polynomial width, samples, and time

A Mahankali, H Zhang, K Dong… - Advances in Neural …, 2023 - proceedings.neurips.cc
Despite recent theoretical progress on the non-convex optimization of two-layer neural
networks, it is still an open question whether gradient descent on neural networks without …

Geometry of isotropic convex bodies

S Brazitikos, A Giannopoulos, P Valettas, BH Vritsiou - 2014 - books.google.com
The study of high-dimensional convex bodies from a geometric and analytic point of view,
with an emphasis on the dependence of various parameters on the dimension, stands at the …

Eigenvalue distribution of large random matrices

LA Pastur, M Shcherbina - 2011 - books.google.com
Random matrix theory is a wide and growing field with a variety of concepts, results, and
techniques and a vast range of applications in mathematics and the related sciences. The …

Non-asymptotic theory of random matrices: extreme singular values

M Rudelson, R Vershynin - Proceedings of the International Congress of Mathematicians 2010 (ICM 2010), 2010 - World Scientific
The classical random matrix theory is mostly focused on asymptotic spectral properties of
random matrices as their dimensions grow to infinity. At the same time many recent …