A theoretical perspective on hyperdimensional computing
Hyperdimensional (HD) computing is a set of neurally inspired methods for obtaining
high-dimensional, low-precision, distributed representations of data. These representations …
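Purely as an illustration of the kind of representation this snippet describes (not code from the paper), here is a minimal sketch of bipolar hypervectors with the two standard HD operations, binding and bundling; the dimensionality and helper names are my own.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (hypothetical choice)

def random_hv():
    """Random bipolar hypervector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding: elementwise product; the result is dissimilar to both inputs."""
    return a * b

def bundle(*hvs):
    """Bundling: elementwise majority vote (ties map to 0); the result stays
    similar to each input."""
    return np.sign(np.sum(hvs, axis=0))

def similarity(a, b):
    """Normalized dot product; near 0 for unrelated hypervectors."""
    return a @ b / D

x, y = random_hv(), random_hv()
print(similarity(x, y))             # ~0: random hypervectors are nearly orthogonal
print(similarity(bundle(x, y), x))  # noticeably positive: bundling preserves similarity
print(similarity(bind(x, y), x))    # ~0: binding produces a dissimilar vector
```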
Concentration inequalities for statistical inference
H Zhang, SX Chen - arXiv preprint arXiv:2011.02258, 2020 - arxiv.org
This paper gives a review of concentration inequalities which are widely employed in
non-asymptotic analyses of mathematical statistics in a wide range of settings, from distribution …
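To make one such inequality concrete (my example, not drawn from the review): Hoeffding's bound P(|X̄ − μ| ≥ t) ≤ 2·exp(−2nt²) for [0, 1]-valued samples, checked numerically against the empirical tail of a Bernoulli sample mean.

```python
import numpy as np

rng = np.random.default_rng(1)
n, t, trials = 200, 0.1, 100_000

# Bernoulli(0.5) samples lie in [0, 1], so Hoeffding's inequality applies.
means = rng.binomial(n, 0.5, size=trials) / n
empirical = np.mean(np.abs(means - 0.5) >= t)
bound = 2 * np.exp(-2 * n * t**2)

print(f"empirical tail:  {empirical:.4f}")  # comes out below the bound
print(f"Hoeffding bound: {bound:.4f}")
```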
An efficient framework for clustered federated learning
We address the problem of Federated Learning (FL) where users are distributed and
partitioned into clusters. This setup captures settings where different groups of users have …
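One plausible alternating scheme for this setting, sketched as a toy (an illustration, not necessarily the paper's algorithm): each user joins the cluster whose current model fits its local data best, then each cluster model is refit on its assigned users.

```python
import numpy as np

rng = np.random.default_rng(2)
n_users, n_clusters, d, n_pts = 20, 2, 5, 50

# Synthetic setup: each user draws data from one of two ground-truth linear models.
true_w = rng.normal(size=(n_clusters, d))
user_cluster = rng.integers(n_clusters, size=n_users)
X = rng.normal(size=(n_users, n_pts, d))
y = np.einsum("upd,ud->up", X, true_w[user_cluster]) + 0.1 * rng.normal(size=(n_users, n_pts))

w = rng.normal(size=(n_clusters, d))  # current cluster model estimates
for _ in range(10):
    # Step 1: each user joins the cluster whose model has lowest loss on its data.
    losses = np.stack([((np.einsum("upd,d->up", X, w[k]) - y) ** 2).mean(axis=1)
                       for k in range(n_clusters)])
    assign = losses.argmin(axis=0)
    # Step 2: refit each cluster model by least squares on its assigned users' data.
    for k in range(n_clusters):
        idx = assign == k
        if idx.any():
            w[k] = np.linalg.lstsq(X[idx].reshape(-1, d), y[idx].reshape(-1), rcond=None)[0]

print("true clusters:", user_cluster)
print("recovered    :", assign)  # should match up to relabeling of the clusters
```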
Benign overfitting in linear regression
The phenomenon of benign overfitting is one of the key mysteries uncovered by deep
learning methodology: deep neural networks seem to predict well, even with a perfect fit to …
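A toy numerical illustration of the phenomenon (my experiment, not the paper's): the minimum-norm interpolator in an overparameterized linear model fits noisy training labels perfectly, and its test error spikes at the interpolation threshold d ≈ n before improving again just past it.

```python
import numpy as np

rng = np.random.default_rng(3)
n, noise, signal_dim = 50, 0.5, 5

def min_norm_errors(d, trials=30):
    tr, te = [], []
    for _ in range(trials):
        w_true = np.zeros(d)
        w_true[:signal_dim] = 1.0
        X = rng.normal(size=(n, d))
        y = X @ w_true + noise * rng.normal(size=n)
        w_hat = np.linalg.pinv(X) @ y  # minimum-norm interpolating solution
        X_te = rng.normal(size=(500, d))
        tr.append(np.mean((X @ w_hat - y) ** 2))
        te.append(np.mean((X_te @ (w_hat - w_true)) ** 2))
    return np.mean(tr), np.mean(te)

# Every fit below interpolates the noisy labels (train MSE ~ 0), yet test error
# explodes near the threshold d ~ n and improves again just past it.
for d in [51, 60, 100, 200]:
    tr, te = min_norm_errors(d)
    print(f"d={d:4d}  train={tr:.2e}  test={te:.2f}")
```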
[BOOK][B] High-dimensional probability: An introduction with applications in data science
R Vershynin - 2018 - books.google.com
High-dimensional probability offers insight into the behavior of random vectors, random
matrices, random subspaces, and objects used to quantify uncertainty in high dimensions …
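One flavor of result the book covers, checked numerically (my example, not the book's): the Euclidean norm of a standard Gaussian vector in n dimensions concentrates around √n with O(1) fluctuations, so the relative spread vanishes as the dimension grows.

```python
import numpy as np

rng = np.random.default_rng(4)
for n in [10, 100, 10_000]:
    norms = np.linalg.norm(rng.normal(size=(5000, n)), axis=1)
    # The standard deviation stays roughly constant (~1/sqrt(2)) while the
    # mean grows like sqrt(n): concentration in high dimensions.
    print(f"n={n:6d}  mean={norms.mean():8.2f}  sqrt(n)={np.sqrt(n):8.2f}  std={norms.std():.3f}")
```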
A modern maximum-likelihood theory for high-dimensional logistic regression
Students in statistics or data science usually learn early on that when the sample size n is
large relative to the number of variables p, fitting a logistic model by the method of maximum …
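A quick simulation of the regime the snippet alludes to (hypothetical parameter choices, fit with scikit-learn): with p/n = 0.2, the essentially unpenalized logistic MLE systematically inflates coefficient magnitudes, so regressing the estimates on the truth gives a slope well above the classical value of 1.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n, p = 1000, 200  # p/n = 0.2: p comparable to n, not vanishingly small

X = rng.normal(size=(n, p)) / np.sqrt(p)
beta = np.concatenate([np.full(p // 2, np.sqrt(10)), np.zeros(p - p // 2)])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta)))

# Essentially unpenalized fit (huge C) so we see raw MLE behavior.
mle = LogisticRegression(C=1e8, max_iter=10_000, tol=1e-8).fit(X, y)
beta_hat = mle.coef_.ravel()

# Classical theory predicts slope ~ 1; in this proportional regime it is inflated.
slope = (beta @ beta_hat) / (beta @ beta)
print(f"inflation factor: {slope:.2f}")
```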
Learning without mixing: Towards a sharp analysis of linear system identification
We prove that the ordinary least-squares (OLS) estimator attains nearly minimax optimal
performance for the identification of linear dynamical systems from a single observed …
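To ground the snippet (an illustration, not the paper's experiments): estimating the state-transition matrix of x_{t+1} = A x_t + w_t from a single trajectory is exactly an ordinary least-squares regression of next states on current states.

```python
import numpy as np

rng = np.random.default_rng(6)
d, T = 3, 2000

# A stable ground-truth system driven by Gaussian process noise.
A = 0.9 * np.eye(d) + 0.05 * rng.normal(size=(d, d))
x = np.zeros((T + 1, d))
for t in range(T):
    x[t + 1] = A @ x[t] + rng.normal(size=d)

# OLS along the single observed trajectory: solve x[t+1] ~ A x[t].
A_hat = np.linalg.lstsq(x[:-1], x[1:], rcond=None)[0].T

print("estimation error (operator norm):", np.linalg.norm(A_hat - A, ord=2))
```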
[BOOK][B] Random matrix methods for machine learning
R Couillet, Z Liao - 2022 - books.google.com
This book presents a unified theory of random matrices for applications in machine learning,
offering a large-dimensional data vision that exploits concentration and universality …
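As a small taste of the universality phenomena mentioned (my illustration, not material from the book): the eigenvalue spectrum of the sample covariance of i.i.d. unit-variance data settles on the Marchenko-Pastur law, whose support depends only on the aspect ratio p/n.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 2000, 500  # aspect ratio c = p/n = 0.25
c = p / n

X = rng.normal(size=(n, p))
eigs = np.linalg.eigvalsh(X.T @ X / n)  # sample covariance spectrum

# Marchenko-Pastur support edges for unit-variance entries: (1 +- sqrt(c))^2.
lo, hi = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
print(f"predicted support: [{lo:.3f}, {hi:.3f}]")
print(f"observed eigrange: [{eigs.min():.3f}, {eigs.max():.3f}]")
```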
Predicting what you already know helps: Provable self-supervised learning
Self-supervised representation learning solves auxiliary prediction tasks (known as pretext
tasks) that do not require labeled data, to learn semantic representations. These pretext …
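Sketching the idea with a hypothetical toy task (not the paper's construction): the pretext task below predicts one half of an unlabeled input from the other, and the resulting predictions are reused as a representation from which a downstream label is linearly recoverable.

```python
import numpy as np

rng = np.random.default_rng(8)
n, d = 5000, 20

# Unlabeled data whose two halves share a common latent structure z.
z = rng.normal(size=(n, 5))
mix1, mix2 = rng.normal(size=(5, d // 2)), rng.normal(size=(5, d // 2))
x1 = z @ mix1 + 0.1 * rng.normal(size=(n, d // 2))  # first half of each input
x2 = z @ mix2 + 0.1 * rng.normal(size=(n, d // 2))  # second half

# Pretext task (no labels needed): linearly predict x2 from x1.
W = np.linalg.lstsq(x1, x2, rcond=None)[0]
rep = x1 @ W  # the pretext predictions serve as the learned representation

# Downstream: a label depending on the latent z is approximately linearly
# recoverable from the representation.
y = z[:, 0]
coef = np.linalg.lstsq(rep, y, rcond=None)[0]
print("downstream R^2:", 1 - np.mean((rep @ coef - y) ** 2) / np.var(y))
```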
Theoretical foundations of t-SNE for visualizing high-dimensional clustered data
This paper investigates the theoretical foundations of the t-distributed stochastic neighbor
embedding (t-SNE) algorithm, a popular nonlinear dimension reduction and data …
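For a hands-on look at the setting studied here (my example, using scikit-learn): t-SNE applied to well-separated high-dimensional Gaussian clusters typically recovers the cluster structure in two dimensions.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(9)
k, per, d = 4, 100, 50

# Four well-separated Gaussian clusters in 50 dimensions.
centers = 10 * rng.normal(size=(k, d))
X = np.vstack([c + rng.normal(size=(per, d)) for c in centers])
labels = np.repeat(np.arange(k), per)

emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

# Check separation: each point should sit closest to its own cluster's
# embedded centroid rather than to any other cluster's.
cents = np.array([emb[labels == j].mean(axis=0) for j in range(k)])
nearest = np.linalg.norm(emb[:, None] - cents[None], axis=2).argmin(axis=1)
print("fraction placed with own cluster:", np.mean(nearest == labels))
```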