RandNLA: randomized numerical linear algebra

P Drineas, MW Mahoney - Communications of the ACM, 2016 - dl.acm.org
Randomization offers new … (DOI: 10.1145/2842602)

An introduction to matrix concentration inequalities

JA Tropp - Foundations and Trends® in Machine Learning, 2015 - nowpublishers.com
Random matrices now play a role in many areas of theoretical, applied, and computational
mathematics. Therefore, it is desirable to have tools for studying random matrices that are …
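
For context, a representative tool from this monograph is the matrix Bernstein inequality; one standard form (paraphrased here, see the monograph for the precise hypotheses) reads: if $X_1, \dots, X_n$ are independent, mean-zero, self-adjoint $d \times d$ random matrices with $\lambda_{\max}(X_k) \le L$ almost surely, and $\sigma^2 = \|\sum_k \mathbb{E}[X_k^2]\|$, then for all $t \ge 0$,
$\Pr\big[\lambda_{\max}\big(\sum_k X_k\big) \ge t\big] \le d \exp\big(\tfrac{-t^2/2}{\sigma^2 + Lt/3}\big).$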

Data-dependent coresets for compressing neural networks with applications to generalization bounds

C Baykal, L Liebenwein, I Gilitschenski… - arXiv preprint arXiv …, 2018 - arxiv.org
We present an efficient coresets-based neural network compression algorithm that sparsifies
the parameters of a trained fully-connected neural network in a manner that provably …
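
As a rough illustration of the importance-sampling idea behind coreset-based sparsification (a minimal sketch only: the function name is ours, and weight magnitude below is a stand-in for the paper's data-dependent sensitivity scores):

    import numpy as np

    def sparsify_neuron(w, m, rng=None):
        # Keep m sampled incoming weights of one neuron. Probabilities are
        # proportional to |w_i| (a magnitude proxy, not the paper's
        # data-dependent sensitivities); reweighting by 1/(m*p_i) makes the
        # sparsified vector unbiased, i.e. E[w_hat] = w.
        rng = np.random.default_rng(rng)
        p = np.abs(w) / np.abs(w).sum()
        idx = rng.choice(len(w), size=m, p=p)   # sample with replacement
        w_hat = np.zeros_like(w, dtype=float)
        for i in idx:
            w_hat[i] += w[i] / (m * p[i])
        return w_hat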

Data-independent neural pruning via coresets

B Mussay, M Osadchy, V Braverman, S Zhou… - arXiv preprint arXiv …, 2019 - arxiv.org
Previous work showed empirically that large neural networks can be significantly reduced in
size while preserving their accuracy. Model compression became a central research topic …

An improved classical singular value transformation for quantum machine learning

A Bakshi, E Tang - Proceedings of the 2024 Annual ACM-SIAM …, 2024 - SIAM
The field of quantum machine learning (QML) produces many proposals for attaining
quantum speedups for tasks in machine learning and data analysis. Such speedups can …

[PDF] User-friendly tools for random matrices: An introduction

JA Tropp - NIPS Tutorial, 2012 - Citeseer
Nota Bene: This manuscript has not yet reached its final form. In particular, I have not had
the opportunity to check all the details carefully and to polish the writing so that it reflects the …

Data-independent structured pruning of neural networks via coresets

B Mussay, D Feldman, S Zhou… - … on Neural Networks …, 2021 - ieeexplore.ieee.org
Model compression is crucial for the deployment of neural networks on devices with limited
computational and memory resources. Many different methods show comparable accuracy …

Detection thresholds in very sparse matrix completion

C Bordenave, S Coste, RR Nadakuditi - Foundations of Computational …, 2023 - Springer
We study the matrix completion problem: an underlying $m \times n$ matrix P is low rank, with
incoherent singular vectors, and a random $m \times n$ matrix A is equal to P on a (uniformly) …
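
A minimal numpy sketch of the observation model described in this snippet, assuming each entry of P is revealed independently with probability p (the paper's exact sampling model and the very sparse regime it studies may differ in detail):

    import numpy as np

    def observed_low_rank(m, n, r, p, rng=None):
        # Random rank-r matrix P; A agrees with P on a uniformly random set
        # of revealed entries (each kept independently with probability p)
        # and is zero elsewhere.
        rng = np.random.default_rng(rng)
        P = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
        mask = rng.random((m, n)) < p
        A = np.where(mask, P, 0.0)
        return P, A, mask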

Near-optimal entrywise sampling of numerically sparse matrices

V Braverman, R Krauthgamer… - … on Learning Theory, 2021 - proceedings.mlr.press
Many real-world data sets are sparse or almost sparse. One method to measure this for a
matrix $A \in \mathbb{R}^{n \times n}$ is the \emph{numerical sparsity}, denoted $\mathsf …
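
The snippet is cut off before the definition; a common convention in this line of work (an assumption here, not a quote from the paper) takes the numerical sparsity of a vector $x$ to be $\|x\|_1^2 / \|x\|_2^2$, extended to matrices row-wise. A one-line check:

    import numpy as np

    def numerical_sparsity(x):
        # ||x||_1^2 / ||x||_2^2: always between 1 and nnz(x); small for
        # "spiky" vectors, close to nnz(x) for flat ones.
        x = np.asarray(x, dtype=float)
        return np.linalg.norm(x, 1) ** 2 / np.linalg.norm(x, 2) ** 2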

Exploiting numerical sparsity for efficient learning: faster eigenvector computation and regression

N Gupta, A Sidford - Advances in Neural Information …, 2018 - proceedings.neurips.cc
In this paper, we obtain improved running times for regression and top eigenvector
computation for numerically sparse matrices. Given a data matrix $\mat{A} \in \R^{n \times d}$ …
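
One ingredient common to this line of work is entrywise $\ell_1$ sampling: draw an index $J$ with probability $p_j = |a_j| / \|a\|_1$ and set $\hat a = (a_J / p_J)\, e_J$. A short calculation (ours, for this standard estimator) shows why numerical sparsity is the relevant quantity: $\mathbb{E}[\hat a] = a$, and for any fixed $x$,
$\operatorname{Var}\langle \hat a, x \rangle \le \sum_j \frac{a_j^2 x_j^2}{p_j} = \|a\|_1 \sum_j |a_j| x_j^2 \le \|a\|_1^2 \|x\|_\infty^2 = s(a)\, \|a\|_2^2\, \|x\|_\infty^2,$
where $s(a) = \|a\|_1^2 / \|a\|_2^2$; averaging $k$ independent copies divides the bound by $k$, so the sample complexity depends on $s(a)$ rather than on $\mathrm{nnz}(a)$.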