Randomized numerical linear algebra: A perspective on the field with an eye to software

R Murray, J Demmel, MW Mahoney… - arXiv preprint arXiv…, 2023 - arxiv.org
Randomized numerical linear algebra (RandNLA, for short) concerns the use of
randomization as a resource to develop improved algorithms for large-scale linear algebra …
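A canonical RandNLA primitive the survey covers is the randomized range-finder behind randomized SVD. A minimal NumPy sketch of that primitive (parameter names are illustrative, not taken from the paper):

```python
import numpy as np

# Minimal randomized SVD via a Gaussian range-finder (Halko-Martinsson-Tropp
# style); a sketch of one RandNLA primitive, not the survey's full scope.
def randomized_svd(A, rank, oversample=10, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, rank + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)       # orthonormal basis for range(A @ Omega)
    B = Q.T @ A                          # small (rank+oversample) x n matrix
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]
```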

Challenges in training PINNs: A loss landscape perspective

P Rathore, W Lei, Z Frangella, L Lu, M Udell - arXiv preprint arXiv…, 2024 - arxiv.org
This paper explores challenges in training Physics-Informed Neural Networks (PINNs),
emphasizing the role of the loss landscape in the training process. We examine difficulties in …
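For context, a PINN is trained by minimizing the sum of a PDE-residual term and a boundary/data term, and the interaction of these two terms shapes the loss landscape the paper studies. A minimal PyTorch sketch for the 1-D heat equation u_t = u_xx (a hypothetical toy setup, not the paper's experiments):

```python
import torch

# Toy PINN for u_t = u_xx; illustrates the composite loss whose landscape
# the paper analyzes, not the paper's own models or benchmarks.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

def pinn_loss(xt_interior, xt_boundary, u_boundary):
    xt = xt_interior.clone().requires_grad_(True)        # columns: (x, t)
    u = net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0:1]
    loss_pde = ((u_t - u_xx) ** 2).mean()                # PDE-residual term
    loss_bc = ((net(xt_boundary) - u_boundary) ** 2).mean()  # boundary term
    return loss_pde + loss_bc    # the two-term sum whose conditioning matters
```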

Recent and upcoming developments in randomized numerical linear algebra for machine learning

M Dereziński, MW Mahoney - Proceedings of the 30th ACM SIGKDD …, 2024 - dl.acm.org
Large matrices arise in many machine learning and data analysis applications, including as
representations of datasets, graphs, model weights, and first and second-order derivatives …

Solving dense linear systems faster than via preconditioning

M Dereziński, J Yang - Proceedings of the 56th Annual ACM Symposium …, 2024 - dl.acm.org
We give a stochastic optimization algorithm that solves a dense n × n real-valued linear
system Ax = b, returning x such that ‖Ax − b‖ ≤ ε‖b‖ in time Õ((n² + nk^(ω−1)) log(1/ε)), where …
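The algorithm belongs to the family of stochastic row-action solvers; a minimal sketch-and-project (randomized block Kaczmarz) iteration illustrates the family, though it is not the paper's exact method:

```python
import numpy as np

# Sketch-and-project / randomized block Kaczmarz for Ax = b: repeatedly
# project the iterate onto the solution set of a random row block.
# A hedged illustration of the solver family, not the paper's algorithm.
def block_kaczmarz(A, b, block=50, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):       # in practice, iterate until the residual is small
        S = rng.choice(m, size=block, replace=False)   # random row block
        r = b[S] - A[S] @ x                            # block residual
        x += np.linalg.lstsq(A[S], r, rcond=None)[0]   # x += pinv(A_S) @ r
    return x
```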

Promise: Preconditioned stochastic optimization methods by incorporating scalable curvature estimates

Z Frangella, P Rathore, S Zhao, M Udell - Journal of Machine Learning …, 2024 - jmlr.org
Ill-conditioned problems are ubiquitous in large-scale machine learning: as a data set grows
to include more and more features correlated with the labels, the condition number …
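PROMISE-style methods precondition stochastic gradients with a scalable curvature estimate. A hedged sketch of one such step, assuming a low-rank eigendecomposition H ≈ V diag(lam) Vᵀ (V with orthonormal columns) has already been computed; the actual PROMISE updates are more sophisticated:

```python
import numpy as np

# One preconditioned-SGD step using a low-rank curvature estimate
# H ~= V @ diag(lam) @ V.T with V orthonormal (assumed given).
def preconditioned_sgd_step(x, grad, V, lam, rho, lr):
    # Apply (V diag(lam) V^T + rho I)^{-1} to the gradient exactly:
    # components in span(V) are scaled by 1/(lam + rho), the rest by 1/rho.
    coeff = V.T @ grad
    precond_grad = V @ (coeff / (lam + rho)) + (grad - V @ coeff) / rho
    return x - lr * precond_grad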

Nyström method for accurate and scalable implicit differentiation

R Hataya, M Yamada - International Conference on Artificial …, 2023 - proceedings.mlr.press
The essential difficulty of gradient-based bilevel optimization using implicit differentiation is
to estimate the inverse Hessian vector product with respect to neural network parameters …
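One way to realize this with a Nyström approximation, assuming only a Hessian-vector-product oracle (`hvp` below is a hypothetical callable), is to build a low-rank surrogate of the Hessian and invert it through its eigendecomposition:

```python
import numpy as np

# Approximate (H + rho*I)^{-1} v from Hessian-vector products via a rank-k
# Nyström approximation H ~= Y @ pinv(Omega.T @ Y) @ Y.T; a simplified
# sketch in the spirit of the paper, not its exact procedure.
def nystrom_inverse_hvp(hvp, dim, v, rank=20, rho=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((dim, rank))
    Y = np.column_stack([hvp(Omega[:, i]) for i in range(rank)])  # H @ Omega
    core = np.linalg.pinv(Omega.T @ Y)          # pinv(Omega^T H Omega)
    lam, V = np.linalg.eigh(Y @ core @ Y.T)     # dense eigh for clarity only
    lam = np.clip(lam, 0.0, None)               # enforce positive semidefiniteness
    return V @ ((V.T @ v) / (lam + rho))        # (H_approx + rho*I)^{-1} v
```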

NysADMM: faster composite convex optimization via low-rank approximation

S Zhao, Z Frangella, M Udell - International Conference on …, 2022 - proceedings.mlr.press
This paper develops a scalable new algorithm, called NysADMM, to minimize a smooth
convex loss function with a convex regularizer. NysADMM accelerates the inexact …
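For orientation, the ADMM scaffolding that NysADMM accelerates looks roughly like the following, where `solve_x` and `prox_r` are hypothetical callables; NysADMM's contribution is solving the x-update's linear system inexactly via a Nyström approximation:

```python
import numpy as np

# Generic consensus ADMM for min_x l(x) + r(z) subject to x = z;
# a skeleton only, not NysADMM's accelerated inner solver.
def admm(solve_x, prox_r, dim, rho=1.0, iters=100):
    x = z = u = np.zeros(dim)
    for _ in range(iters):
        x = solve_x(z - u, rho)   # ~= argmin_x l(x) + (rho/2)*||x - (z - u)||^2
        z = prox_r(x + u, rho)    # proximal step on the regularizer
        u = u + x - z             # scaled dual update
    return z
```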

Linear-Scaling kernels for protein sequences and small molecules outperform deep learning while providing uncertainty quantitation and improved interpretability

J Parkinson, W Wang - Journal of Chemical Information and …, 2023 - ACS Publications
The Gaussian process (GP) is a Bayesian model that provides several advantages for
regression tasks in machine learning, such as reliable quantitation of uncertainty and …
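For contrast with the paper's linear-scaling kernels, exact GP regression costs O(n³) in the training-set size; a minimal NumPy sketch with an RBF kernel makes that bottleneck concrete (illustrative only, not the paper's kernels):

```python
import numpy as np

# Exact GP posterior mean with an RBF kernel; the O(n^3) solve below is the
# scaling bottleneck that linear-scaling kernel methods avoid.
def rbf(X1, X2, lengthscale=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior_mean(X_train, y_train, X_test, noise=1e-2):
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)      # O(n^3) dense solve
    return rbf(X_test, X_train) @ alpha
```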

Randomized low-rank approximation of monotone matrix functions

D Persson, D Kressner - SIAM Journal on Matrix Analysis and Applications, 2023 - SIAM
This work is concerned with computing low-rank approximations of a matrix function for a
large symmetric positive semidefinite matrix, a task that arises in, e.g., statistical learning and …
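The idea, roughly, is to form a randomized Nyström approximation of A and apply f to its eigenvalues. A simplified NumPy sketch, close in spirit to the paper's approach but not its exact algorithm; f is assumed vectorized with f(0) = 0 (e.g., np.sqrt) so the result stays low-rank:

```python
import numpy as np

# Low-rank surrogate for f(A), A symmetric positive semidefinite:
# build a Nyström approximation of A, then apply f to its eigenvalues.
def fun_nystrom(A, f, rank, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Omega, _ = np.linalg.qr(rng.standard_normal((n, rank)))
    Y = A @ Omega
    core = np.linalg.pinv(Omega.T @ Y)          # pinv(Omega^T A Omega)
    lam, W = np.linalg.eigh(Y @ core @ Y.T)     # eigenpairs of the approximation
    lam = np.clip(lam, 0.0, None)
    return W @ np.diag(f(lam)) @ W.T            # f applied spectrally
```

For instance, fun_nystrom(A, np.sqrt, rank=20) gives a cheap surrogate for the matrix square root A^(1/2).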

Faster linear systems and matrix norm approximation via multi-level sketched preconditioning

M Dereziński, C Musco, J Yang - Proceedings of the 2025 Annual ACM-SIAM …, 2025 - SIAM
We present a new class of preconditioned iterative methods for solving linear systems of the
form Ax = b. Our methods are based on constructing a low-rank Nyström approximation to A …
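A single-level version of this idea, sketched with NumPy/SciPy (the paper's multi-level construction is more involved):

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

# Nystrom-preconditioned CG: build a rank-k Nystrom approximation
# A ~= W @ diag(lam) @ W.T and use (approx + mu*I)^{-1} as the preconditioner.
# A one-level sketch of the idea, not the paper's multi-level method.
def nystrom_pcg(A, b, rank=50, mu=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Omega, _ = np.linalg.qr(rng.standard_normal((n, rank)))
    Y = A @ Omega
    lam, W = np.linalg.eigh(Y @ np.linalg.pinv(Omega.T @ Y) @ Y.T)
    lam = np.clip(lam, 0.0, None)

    def apply_inv(v):                 # (approx + mu*I)^{-1} v via eigenbasis
        return W @ ((W.T @ v) / (lam + mu))

    M = LinearOperator((n, n), matvec=apply_inv)
    x, info = cg(A, b, M=M)           # preconditioned conjugate gradient
    return x
```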