Randomized numerical linear algebra: A perspective on the field with an eye to software
Randomized numerical linear algebra (RandNLA, for short) concerns the use of
randomization as a resource to develop improved algorithms for large-scale linear algebra …
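As a toy illustration of the sketch-and-solve paradigm this survey covers (not an algorithm taken from the paper itself), the following Python snippet compresses an overdetermined least-squares problem with a Gaussian sketching matrix and solves the much smaller sketched problem; the problem sizes and sketch dimension are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Overdetermined least-squares problem: min_x ||A x - b||_2 with n >> d.
n, d = 20000, 50
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

# Sketch-and-solve: compress the rows with a Gaussian sketch S of size s x n,
# s << n, and solve the smaller problem min_x ||S A x - S b||_2 instead.
s = 400  # sketch size, an illustrative choice on the order of d / eps^2
S = rng.standard_normal((s, n)) / np.sqrt(s)
x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]

# Compare the residual against the exact least-squares solution.
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
print("sketched residual :", np.linalg.norm(A @ x_sketch - b))
print("optimal residual  :", np.linalg.norm(A @ x_exact - b))

The sketched solution is cheaper because the small s x d problem replaces the n x d one; the price is a modest increase in residual that shrinks as the sketch size grows.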
Challenges in training PINNs: A loss landscape perspective
This paper explores challenges in training Physics-Informed Neural Networks (PINNs),
emphasizing the role of the loss landscape in the training process. We examine difficulties in …
Recent and upcoming developments in randomized numerical linear algebra for machine learning
Large matrices arise in many machine learning and data analysis applications, including as
representations of datasets, graphs, model weights, and first- and second-order derivatives …
Solving dense linear systems faster than via preconditioning
We give a stochastic optimization algorithm that solves a dense n × n real-valued linear
system Ax = b, returning x such that ‖Ax − b‖ ≤ ε‖b‖ in time Õ((n^2 + nk^(ω−1)) log(1/ε)), where …
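The runtime bound above comes from a specific sketching-based stochastic solver that is not reproduced here; as a generic illustration of stochastic, row-sampling solvers for Ax = b, the sketch below runs classical randomized Kaczmarz on a small synthetic positive definite system. The sizes and matrix construction are assumptions chosen only so the demo converges quickly.

import numpy as np

rng = np.random.default_rng(1)

# A dense, consistent, reasonably well-conditioned system A x = b.
n = 300
G = rng.standard_normal((n, n))
A = G @ G.T / n + np.eye(n)          # symmetric positive definite
x_true = rng.standard_normal(n)
b = A @ x_true

# Classical randomized Kaczmarz: project the iterate onto the hyperplane of one
# row at a time, sampling row i with probability proportional to ||a_i||^2.
row_norms2 = np.sum(A * A, axis=1)
probs = row_norms2 / row_norms2.sum()
num_iters = 100 * n
rows = rng.choice(n, size=num_iters, p=probs)

x = np.zeros(n)
for i in rows:
    x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))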
Promise: Preconditioned stochastic optimization methods by incorporating scalable curvature estimates
Ill-conditioned problems are ubiquitous in large-scale machine learning: as a data set grows
to include more and more features correlated with the labels, the condition number …
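A minimal numerical illustration of that claim (not code from the paper): appending noisy copies of existing features to a design matrix drives up the condition number of X^T X, which is exactly the ill-conditioning that a preconditioned stochastic method targets. The feature counts and noise level below are arbitrary choices.

import numpy as np

rng = np.random.default_rng(7)

n = 1000
base = rng.standard_normal((n, 20))          # 20 roughly independent features
for extra in (0, 20, 80):
    # Append `extra` features that are noisy copies of randomly chosen columns.
    idx = rng.integers(0, 20, extra)
    copies = base[:, idx] + 0.01 * rng.standard_normal((n, extra))
    X = np.hstack([base, copies])
    print(f"{extra:3d} correlated features -> cond(X^T X) = "
          f"{np.linalg.cond(X.T @ X):.2e}")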
Nyström method for accurate and scalable implicit differentiation
The essential difficulty of gradient-based bilevel optimization using implicit differentiation is
to estimate the inverse Hessian vector product with respect to neural network parameters …
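One common way to approximate such inverse Hessian-vector products, shown here as a hedged sketch rather than the paper's exact estimator, is to form a rank-k randomized Nyström approximation of the damped Hessian and apply its inverse analytically through the resulting eigendecomposition. The Hessian stand-in, rank, and damping below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a large PSD Hessian with low effective rank plus a small tail.
n, k = 2000, 50
W = rng.standard_normal((n, k))
H = W @ W.T + 1e-3 * np.eye(n)
v = rng.standard_normal(n)     # e.g. the gradient appearing in the hypergradient
lam = 1e-1                     # damping / regularization

# Rank-k randomized Nystrom approximation  H ~ U diag(L) U^T.
Omega = rng.standard_normal((n, k))
Y = H @ Omega
nu = 1e-10 * np.linalg.norm(Y)               # tiny shift for numerical stability
Y_nu = Y + nu * Omega
C = np.linalg.cholesky(Omega.T @ Y_nu)       # lower-triangular Cholesky factor
B = np.linalg.solve(C, Y_nu.T).T
U, s, _ = np.linalg.svd(B, full_matrices=False)
L = np.maximum(s**2 - nu, 0.0)

# Approximate inverse-Hessian-vector product (H + lam I)^{-1} v:
# exact on range(U), a plain 1/lam scaling on the orthogonal complement.
Utv = U.T @ v
ihvp = U @ (Utv / (L + lam)) + (v - U @ Utv) / lam

exact = np.linalg.solve(H + lam * np.eye(n), v)
print("relative error of Nystrom iHVP:",
      np.linalg.norm(ihvp - exact) / np.linalg.norm(exact))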
NysADMM: faster composite convex optimization via low-rank approximation
This paper develops a scalable new algorithm, called NysADMM, to minimize a smooth
convex loss function with a convex regularizer. NysADMM accelerates the inexact …
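For context, a bare-bones ADMM loop for the lasso (a smooth least-squares loss plus an ℓ1 regularizer) is sketched below; the x-update is the linear solve that NysADMM performs inexactly with a randomized low-rank (Nyström) approximation, whereas this toy version simply solves it directly. Problem sizes and penalty parameters are assumptions.

import numpy as np

rng = np.random.default_rng(3)

# Lasso:  minimize  0.5 * ||A x - b||^2 + gamma * ||x||_1
m, d = 1500, 400
A = rng.standard_normal((m, d))
x_true = np.zeros(d)
x_true[:20] = rng.standard_normal(20)
b = A @ x_true + 0.1 * rng.standard_normal(m)
gamma = 0.1 * np.linalg.norm(A.T @ b, np.inf)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rho = 1.0
AtA, Atb = A.T @ A, A.T @ b
M = AtA + rho * np.eye(d)   # NysADMM avoids forming/factoring this when d is large
x = np.zeros(d)
z = np.zeros(d)
u = np.zeros(d)

for _ in range(200):
    # x-update: the linear solve that NysADMM accelerates with a randomized
    # low-rank approximation; here it is solved directly for simplicity.
    x = np.linalg.solve(M, Atb + rho * (z - u))
    z = soft_threshold(x + u, gamma / rho)   # proximal step for gamma * ||.||_1
    u = u + x - z                            # scaled dual update

print("nonzeros recovered:", int(np.count_nonzero(np.abs(z) > 1e-6)))
print("objective:", 0.5 * np.linalg.norm(A @ z - b) ** 2 + gamma * np.abs(z).sum())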
Linear-Scaling kernels for protein sequences and small molecules outperform deep learning while providing uncertainty quantitation and improved interpretability
J Parkinson, W Wang - Journal of Chemical Information and …, 2023 - ACS Publications
A Gaussian process (GP) is a Bayesian model that provides several advantages for
regression tasks in machine learning, such as reliable quantitation of uncertainty and …
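To make the uncertainty-quantitation point concrete, here is a minimal exact GP regression on toy 1-D data with an RBF kernel, reporting a posterior mean and standard deviation. It does not use the paper's linear-scaling kernels, and the data and hyperparameters are illustrative.

import numpy as np

rng = np.random.default_rng(4)

# Toy 1-D regression data.
X = np.sort(rng.uniform(-3, 3, 40))
y = np.sin(X) + 0.1 * rng.standard_normal(40)
X_test = np.linspace(-4, 4, 200)

def rbf_kernel(a, b, lengthscale=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale ** 2)

# Exact GP regression: posterior mean and variance under an RBF kernel.
noise = 0.1 ** 2
K = rbf_kernel(X, X) + noise * np.eye(len(X))
K_s = rbf_kernel(X_test, X)
mean = K_s @ np.linalg.solve(K, y)
var = rbf_kernel(X_test, X_test).diagonal() - np.einsum(
    "ij,ji->i", K_s, np.linalg.solve(K, K_s.T))

print("posterior mean near x=0:", mean[100], "+/-", np.sqrt(var[100]))

The posterior variance is what the abstract refers to as uncertainty quantitation: it grows outside the range of the training inputs, flagging predictions that should not be trusted.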
Randomized low-rank approximation of monotone matrix functions
This work is concerned with computing low-rank approximations of a matrix function for a
large symmetric positive semidefinite matrix, a task that arises in, e.g., statistical learning and …
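A hedged sketch of the general recipe the abstract alludes to: build a randomized Nyström approximation of the positive semidefinite matrix first, then apply the function to the small factorization (here f is the square root). The test matrix, rank, and eigenvalue decay are assumptions, and this is not claimed to reproduce the paper's exact algorithm or error analysis.

import numpy as np

rng = np.random.default_rng(5)

# PSD test matrix with fast eigenvalue decay.
n, k = 1000, 30
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigvals = 2.0 ** (-np.arange(n) / 3.0)
A = (Q * eigvals) @ Q.T

f = np.sqrt            # a monotone matrix function, e.g. the matrix square root

# Rank-k randomized Nystrom approximation  A ~ U diag(L) U^T.
Omega = rng.standard_normal((n, k))
Y = A @ Omega
C = np.linalg.cholesky(Omega.T @ Y)
B = np.linalg.solve(C, Y.T).T
U, s, _ = np.linalg.svd(B, full_matrices=False)
L = s ** 2

# Apply f to the small factorization instead of to A itself:
# f(A) ~ U diag(f(L)) U^T  (for operator-monotone f with f(0) = 0).
fA_approx = (U * f(L)) @ U.T

fA_exact = (Q * f(eigvals)) @ Q.T
print("relative Frobenius error:",
      np.linalg.norm(fA_approx - fA_exact) / np.linalg.norm(fA_exact))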
Faster linear systems and matrix norm approximation via multi-level sketched preconditioning
We present a new class of preconditioned iterative methods for solving linear systems of the
form Ax = b. Our methods are based on constructing a low-rank Nyström approximation to A …
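As a single-level illustration of Nyström-based preconditioning (the paper's multi-level scheme is not reproduced), the snippet below builds a rank-k Nyström approximation of a PSD matrix A and uses it as a preconditioner inside conjugate gradients for the regularized system (A + μI)x = b. Sizes, rank, and regularization are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(6)

# Regularized PSD system  (A + mu I) x = b  with fast spectral decay in A.
n, k, mu = 1500, 60, 1e-3
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (Q * (1.0 / np.arange(1, n + 1) ** 2)) @ Q.T     # eigenvalues 1/i^2
b = rng.standard_normal(n)
M = A + mu * np.eye(n)

# Rank-k Nystrom approximation A ~ U diag(L) U^T, used to build a preconditioner.
Omega = rng.standard_normal((n, k))
Y = A @ Omega
C = np.linalg.cholesky(Omega.T @ Y)
B = np.linalg.solve(C, Y.T).T
U, s, _ = np.linalg.svd(B, full_matrices=False)
L = s ** 2

def precond(v):
    # Approximates (A + mu I)^{-1} v using the Nystrom eigenpairs:
    # exact on range(U), a constant scaling on the orthogonal complement.
    Utv = U.T @ v
    return U @ (Utv / (L + mu)) + (v - U @ Utv) / (L[-1] + mu)

def pcg(matvec, b, precond, tol=1e-10, maxiter=500):
    x = np.zeros_like(b)
    r = b - matvec(x)
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for it in range(maxiter):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, it + 1
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

x, iters = pcg(lambda v: M @ v, b, precond)
print("PCG iterations:", iters,
      " relative residual:", np.linalg.norm(M @ x - b) / np.linalg.norm(b))

Because the Nyström approximation captures the large eigenvalues of A, the preconditioned system is well conditioned and CG converges in a handful of iterations instead of hundreds.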