A theoretical perspective on hyperdimensional computing
Hyperdimensional (HD) computing is a set of neurally inspired methods for obtaining
high-dimensional, low-precision, distributed representations of data. These representations …
Concentration inequalities for statistical inference
H Zhang, SX Chen - arXiv preprint arXiv:2011.02258, 2020 - arxiv.org
This paper gives a review of concentration inequalities, which are widely employed in non-asymptotic analyses in mathematical statistics in a wide range of settings, from distribution …
An efficient framework for clustered federated learning
We address the problem of Federated Learning (FL) where users are distributed and
partitioned into clusters. This setup captures settings where different groups of users have …
Benign overfitting in linear regression
The phenomenon of benign overfitting is one of the key mysteries uncovered by deep
learning methodology: deep neural networks seem to predict well, even with a perfect fit to …
Predicting what you already know helps: Provable self-supervised learning
Self-supervised representation learning solves auxiliary prediction tasks (known as pretext
tasks), that do not require labeled data, to learn semantic representations. These pretext …
[BOOK][B] High-dimensional probability: An introduction with applications in data science
R Vershynin - 2018 - books.google.com
High-dimensional probability offers insight into the behavior of random vectors, random
matrices, random subspaces, and objects used to quantify uncertainty in high dimensions …
Learning without mixing: Towards a sharp analysis of linear system identification
We prove that the ordinary least-squares (OLS) estimator attains nearly minimax optimal
performance for the identification of linear dynamical systems from a single observed …
A modern maximum-likelihood theory for high-dimensional logistic regression
Students in statistics or data science usually learn early on that when the sample size n is
large relative to the number of variables p, fitting a logistic model by the method of maximum …
Naive exploration is optimal for online LQR
We consider the problem of online adaptive control of the linear quadratic regulator, where
the true system parameters are unknown. We prove new upper and lower bounds …
Approximate residual balancing: debiased inference of average treatment effects in high dimensions
There are many settings where researchers are interested in estimating average treatment
effects and are willing to rely on the unconfoundedness assumption, which requires that the …