Randomized numerical linear algebra: A perspective on the field with an eye to software
Randomized numerical linear algebra (RandNLA, for short) concerns the use of
randomization as a resource to develop improved algorithms for large-scale linear algebra …
Randomized Nyström preconditioning
This paper introduces the Nyström preconditioned conjugate gradient (PCG) algorithm for
solving a symmetric positive-definite linear system. The algorithm applies the randomized …
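The snippet above describes Nyström PCG. A minimal NumPy sketch of the idea, assuming the standard recipe (randomized rank-r Nyström approximation of A, then CG on the regularized system with the induced preconditioner); parameter names and the stability shift here are illustrative choices, not the paper's reference implementation:

```python
import numpy as np

def nystrom_pcg(A, b, rank=30, mu=1e-3, tol=1e-8, maxit=200, seed=0):
    """Sketch of Nystrom-preconditioned CG for (A + mu*I) x = b, A sym. psd.
    Builds A ~= U diag(lam) U^T from a random sketch, then preconditions CG
    with P^{-1} = (lam_min + mu) U (Lam + mu I)^{-1} U^T + (I - U U^T)."""
    n = A.shape[0]
    G = np.random.default_rng(seed).standard_normal((n, rank))
    Q, _ = np.linalg.qr(G)                     # orthonormalize the test matrix
    Y = A @ Q
    nu = np.sqrt(n) * np.finfo(float).eps * np.linalg.norm(Y)  # tiny shift
    C = np.linalg.cholesky(Q.T @ (Y + nu * Q))
    B = np.linalg.solve(C, Y.T).T              # B = Y C^{-T}
    U, s, _ = np.linalg.svd(B, full_matrices=False)
    lam = np.maximum(s**2 - nu, 0.0)           # Nystrom eigenvalue estimates

    def precond(r):
        Ur = U.T @ r
        return (lam[-1] + mu) * (U @ (Ur / (lam + mu))) + (r - U @ Ur)

    # standard preconditioned conjugate gradient on (A + mu I) x = b
    x = np.zeros(n)
    r = b.copy()
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p + mu * p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

The preconditioner flattens the top of the spectrum captured by the Nyström approximation, so CG convergence is governed by the (small) tail eigenvalues plus the shift mu.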
Simpler is better: a comparative study of randomized pivoting algorithms for CUR and interpolative decompositions
Matrix skeletonizations like the interpolative and CUR decompositions provide a framework
for low-rank approximation in which subsets of a given matrix's columns and/or rows are …
Randomly pivoted Cholesky: Practical approximation of a kernel matrix with few entry evaluations
The randomly pivoted Cholesky algorithm (RPCholesky) computes a factorized rank-k
approximation of an N × N positive-semidefinite (psd) matrix. RPCholesky requires only …
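The entry above concerns RPCholesky, which accesses the matrix only through its diagonal and a few columns. A minimal sketch under that access model (the function and argument names are illustrative, not from the paper's code):

```python
import numpy as np

def rp_cholesky(A_diag, A_col, n, k, seed=None):
    """Randomly pivoted Cholesky: rank-k approximation of an n x n psd
    matrix seen only via its diagonal (A_diag, length-n array) and a
    column oracle (A_col(i) returns column i). Returns F with A ~= F F^T."""
    rng = np.random.default_rng(seed)
    F = np.zeros((n, k))
    d = np.asarray(A_diag, dtype=float).copy()   # residual diagonal
    for t in range(k):
        # sample a pivot with probability proportional to the residual diagonal
        p = rng.choice(n, p=d / d.sum())
        g = A_col(p) - F[:, :t] @ F[p, :t]       # residual column at the pivot
        F[:, t] = g / np.sqrt(g[p])
        d = np.maximum(d - F[:, t] ** 2, 0.0)    # downdate the diagonal
    return F
```

Each step evaluates one column of the matrix, so a rank-k approximation costs only k column evaluations plus O(nk^2) arithmetic.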
Exact expressions for double descent and implicit regularization via surrogate random design
Double descent refers to the phase transition that is exhibited by the generalization error of
unregularized learning models when varying the ratio between the number of parameters …
Recent and upcoming developments in randomized numerical linear algebra for machine learning
Large matrices arise in many machine learning and data analysis applications, including as
representations of datasets, graphs, model weights, and first and second-order derivatives …
Taxonomizing local versus global structure in neural network loss landscapes
Viewing neural network models in terms of their loss landscapes has a long history in the
statistical mechanics approach to learning, and in recent years it has received attention …
Sharp analysis of sketch-and-project methods via a connection to randomized singular value decomposition
Sketch-and-project is a framework which unifies many known iterative methods for solving
linear systems and their variants, as well as further extensions to nonlinear optimization …
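The simplest member of the sketch-and-project family described above is randomized Kaczmarz, where each sketch picks a single row. A minimal sketch of that special case (row-norm sampling is a standard choice; the function name is illustrative):

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Randomized Kaczmarz for a consistent system A x = b: at each step,
    project the iterate onto the hyperplane {x : a_i^T x = b_i} of one row,
    sampled with probability proportional to ||a_i||^2."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.einsum("ij,ij->i", A, A)      # squared row norms
    probs = row_norms / row_norms.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]   # project onto row i
    return x
```

Replacing the single row with a random sketch S^T A recovers the general sketch-and-project update, which is the setting the paper analyzes.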
Solving dense linear systems faster than via preconditioning
We give a stochastic optimization algorithm that solves a dense n × n real-valued linear
system Ax = b, returning x such that ||Ax − b|| ≤ ε||b|| in time Õ((n² + nk^{ω−1}) log(1/ε)), where …
Large-scale non-negative subspace clustering based on Nyström approximation
H Jia, Q Ren, L Huang, Q Mao, L Wang, H Song - Information Sciences, 2023 - Elsevier
Large-scale subspace clustering usually drops the requirements of the full similarity matrix
and Laplacian matrix but constructs the anchor affinity matrix and uses matrix approximation …