A theoretical perspective on hyperdimensional computing

A Thomas, S Dasgupta, T Rosing - Journal of Artificial Intelligence Research, 2021 - jair.org
Hyperdimensional (HD) computing is a set of neurally inspired methods for obtaining
high-dimensional, low-precision, distributed representations of data. These representations …
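
As a rough illustration of the kind of encoding these methods use, here is a minimal numpy sketch of a common record-based HD encoding: random bipolar codebooks, binding by elementwise multiplication, bundling by summation and sign. The dimensionality, feature count, and quantization below are arbitrary illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000                      # hypervector dimensionality: high-dimensional, low-precision (+/-1 entries)
n_features, n_levels = 21, 8    # odd feature count so the bundled sum never ties at zero

# Random bipolar codebooks for feature identities and for quantized feature values.
feature_hvs = rng.choice([-1, 1], size=(n_features, D))
level_hvs = rng.choice([-1, 1], size=(n_levels, D))

def encode(x):
    """Encode a feature vector x with entries in [0, 1): bind each feature id with its level, then bundle."""
    levels = np.minimum((x * n_levels).astype(int), n_levels - 1)
    bound = feature_hvs * level_hvs[levels]     # binding: elementwise product
    return np.sign(bound.sum(axis=0))           # bundling: componentwise majority vote

hv = encode(rng.random(n_features))
print(hv.shape, np.unique(hv))                  # (10000,), entries in {-1, +1}
```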

Concentration inequalities for statistical inference

H Zhang, SX Chen - arXiv preprint arXiv:2011.02258, 2020 - arxiv.org
This paper gives a review of concentration inequalities, which are widely employed in non-
asymptotic analyses in mathematical statistics across a wide range of settings, from distribution …
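
For context, one canonical inequality of this type (stated here from standard references, not necessarily in the exact form used in the review) is Hoeffding's inequality for sums of bounded independent random variables:

```latex
% Hoeffding's inequality: X_1,\dots,X_n independent with a_i \le X_i \le b_i, and S_n = \sum_{i=1}^n X_i
\[
  \Pr\bigl(\lvert S_n - \mathbb{E}\,S_n \rvert \ge t\bigr)
  \;\le\; 2\exp\!\left(-\frac{2t^2}{\sum_{i=1}^n (b_i - a_i)^2}\right),
  \qquad t > 0.
\]
```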

An efficient framework for clustered federated learning

A Ghosh, J Chung, D Yin… - Advances in Neural …, 2020 - proceedings.neurips.cc
We address the problem of Federated Learning (FL) where users are distributed and
partitioned into clusters. This setup captures settings where different groups of users have …
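
A minimal numpy sketch of the cluster-then-aggregate idea this line of work builds on: each user evaluates every cluster model on its own data and joins the best-fitting cluster, then the server updates each cluster model from its assigned users. The toy linear-regression setup, step size, and iteration count are illustrative assumptions, not the paper's exact algorithm or guarantees.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: users are partitioned into k latent clusters, each with its own linear model.
k, n_users, d, n_i = 3, 30, 5, 40
true_w = rng.normal(size=(k, d))
user_cluster = rng.integers(k, size=n_users)
X = rng.normal(size=(n_users, n_i, d))
y = np.einsum('uij,uj->ui', X, true_w[user_cluster]) + 0.1 * rng.normal(size=(n_users, n_i))

w = rng.normal(size=(k, d))                     # k cluster models, randomly initialized
for _ in range(30):
    # Each user picks the cluster model with the smallest loss on its local data.
    losses = np.stack([((np.einsum('uij,j->ui', X, w[c]) - y) ** 2).mean(axis=1) for c in range(k)], axis=1)
    assign = losses.argmin(axis=1)
    # The server updates each cluster model with a gradient step averaged over its assigned users.
    for c in range(k):
        members = np.where(assign == c)[0]
        if len(members) == 0:
            continue
        grads = [2 * X[u].T @ (X[u] @ w[c] - y[u]) / n_i for u in members]
        w[c] -= 0.1 * np.mean(grads, axis=0)

for c in range(k):
    print(f"cluster model {c}: distance to nearest true model = "
          f"{min(np.linalg.norm(w[c] - true_w[j]) for j in range(k)):.3f}")
```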

Benign overfitting in linear regression

PL Bartlett, PM Long, G Lugosi… - Proceedings of the …, 2020 - National Acad Sciences
The phenomenon of benign overfitting is one of the key mysteries uncovered by deep
learning methodology: deep neural networks seem to predict well, even with a perfect fit to …
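
The concrete object of study is the minimum-norm interpolating least-squares solution; a small numpy sketch of that estimator is below. The isotropic toy design used here does not satisfy the covariance conditions under which the paper proves benign overfitting; the sketch only exhibits the interpolator itself (zero training error despite noisy labels).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 500                               # overparameterized: many more features than samples
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:5] = 1.0
y = X @ beta_true + 0.5 * rng.normal(size=n)

# Minimum-l2-norm interpolating least-squares solution (the estimator the analysis studies).
beta_hat = np.linalg.pinv(X) @ y
print("train MSE:", np.mean((X @ beta_hat - y) ** 2))          # essentially 0: perfect fit to noisy data

X_test = rng.normal(size=(2000, p))
y_test = X_test @ beta_true + 0.5 * rng.normal(size=2000)
print("test MSE:", np.mean((X_test @ beta_hat - y_test) ** 2))
```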

Predicting what you already know helps: Provable self-supervised learning

JD Lee, Q Lei, N Saunshi… - Advances in Neural …, 2021 - proceedings.neurips.cc
Self-supervised representation learning solves auxiliary prediction tasks (known as pretext
tasks) that do not require labeled data, in order to learn semantic representations. These pretext …
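
A toy numpy sketch of the mechanism studied in this setting: when a pretext target X2 and the downstream label y both depend on the input X1 through shared structure, regressing X2 on X1 (using no labels) yields a representation on which a simple linear probe predicts y well. The particular linear latent-variable model below is an illustrative assumption, not the paper's general setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d1, d2 = 5000, 20, 10
# Toy model: the pretext target X2 and the label y both depend on X1 through a shared latent variable.
X1 = rng.normal(size=(n, d1))
latent = X1 @ rng.normal(size=d1)
X2 = np.outer(latent, rng.normal(size=d2)) + 0.1 * rng.normal(size=(n, d2))
y = latent + 0.1 * rng.normal(size=n)

# Pretext task: predict the known part X2 from X1 (no labels needed).
W, *_ = np.linalg.lstsq(X1, X2, rcond=None)
psi = X1 @ W                                  # learned representation, roughly E[X2 | X1]

# Downstream task: a linear predictor on top of the frozen representation.
w, *_ = np.linalg.lstsq(psi, y, rcond=None)
print("downstream R^2:", 1 - np.mean((psi @ w - y) ** 2) / np.var(y))
```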

[Book] High-dimensional probability: An introduction with applications in data science

R Vershynin - 2018 - books.google.com
High-dimensional probability offers insight into the behavior of random vectors, random
matrices, random subspaces, and objects used to quantify uncertainty in high dimensions …
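
A quick numerical illustration of one of the book's recurring themes, concentration of the norm of a random vector: the Euclidean norm of a standard Gaussian vector in R^d sits near sqrt(d) with fluctuations of constant order. The dimensions and sample counts below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
for d in (10, 1_000, 100_000):
    g = rng.normal(size=(200, d))            # 200 independent standard Gaussian vectors in R^d
    norms = np.linalg.norm(g, axis=1)
    print(f"d={d}: mean norm / sqrt(d) = {norms.mean() / np.sqrt(d):.3f}, std of norm = {norms.std():.3f}")
```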

Learning without mixing: Towards a sharp analysis of linear system identification

M Simchowitz, H Mania, S Tu… - … On Learning Theory, 2018 - proceedings.mlr.press
We prove that the ordinary least-squares (OLS) estimator attains nearly minimax optimal
performance for the identification of linear dynamical systems from a single observed …
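
A minimal numpy sketch of the estimator being analyzed: generate a single trajectory of a toy stable linear system x_{t+1} = A x_t + w_t and recover A by ordinary least squares over consecutive state pairs. The specific A, noise level, and trajectory length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1, 0.0],
                   [0.0, 0.8, 0.2],
                   [0.0, 0.0, 0.7]])          # a toy stable system, assumed for illustration
d, T, sigma = 3, 500, 0.1

x = np.zeros(d)
X_prev, X_next = [], []
for _ in range(T):                            # one single observed trajectory x_{t+1} = A x_t + w_t
    x_new = A_true @ x + sigma * rng.normal(size=d)
    X_prev.append(x)
    X_next.append(x_new)
    x = x_new
X_prev, X_next = np.array(X_prev), np.array(X_next)

# OLS over consecutive state pairs: minimize sum_t ||x_{t+1} - A x_t||^2.
A_hat_T, *_ = np.linalg.lstsq(X_prev, X_next, rcond=None)
A_hat = A_hat_T.T
print("estimation error ||A_hat - A||_F =", np.linalg.norm(A_hat - A_true))
```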

A modern maximum-likelihood theory for high-dimensional logistic regression

P Sur, EJ Candès - Proceedings of the National Academy of …, 2019 - National Acad Sciences
Students in statistics or data science usually learn early on that when the sample size n is
large relative to the number of variables p, fitting a logistic model by the method of maximum …
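
A small numpy/scipy simulation of the regime the paper studies (p comparable to n), using a plain Newton solver for the unregularized logistic MLE; the dimensions and signal strength are illustrative assumptions, not the paper's. In such regimes the paper shows that the classical large-n theory fails and the MLE tends to overestimate coefficient magnitudes.

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(0)
n, p = 2000, 200                              # p/n = 0.1: p is not negligible relative to n
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:50] = 0.28                         # moderate overall signal strength
y = rng.binomial(1, expit(X @ beta_true))

# Unregularized logistic MLE via plain Newton iterations (a generic stand-in for a packaged solver).
b = np.zeros(p)
for _ in range(25):
    mu = expit(X @ b)
    W = mu * (1 - mu)
    grad = X.T @ (y - mu)
    hess = (X * W[:, None]).T @ X
    b += np.linalg.solve(hess, grad)

print("mean fitted coefficient on the signal support:", b[:50].mean(), "(true value 0.28)")
```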

Naive exploration is optimal for online LQR

M Simchowitz, D Foster - International Conference on …, 2020 - proceedings.mlr.press
We consider the problem of online adaptive control of the linear quadratic regulator, where
the true system parameters are unknown. We prove new upper and lower bounds …
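
A toy scalar sketch of the "naive exploration" scheme analyzed there: periodically estimate the unknown dynamics by least squares, play the certainty-equivalent LQR controller, and inject decaying Gaussian exploration noise. The scalar system, cost weights, noise schedule, and refit interval are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true, sigma = 0.9, 0.5, 0.1         # unknown scalar dynamics x_{t+1} = a x_t + b u_t + w_t

def lqr_gain(a, b, q=1.0, r=1.0):
    """Infinite-horizon LQR gain for cost q*x^2 + r*u^2, by fixed-point iteration on the scalar Riccati equation."""
    P = q
    for _ in range(500):
        P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)
    return a * b * P / (r + b * b * P)

x, history = 0.0, []
a_hat, b_hat = 0.0, 1.0                       # crude initial estimates of the dynamics
K = lqr_gain(a_hat, b_hat)
for t in range(1, 2001):
    u = -K * x + t ** (-0.25) * rng.normal()  # certainty equivalence plus decaying exploration noise
    x_next = a_true * x + b_true * u + sigma * rng.normal()
    history.append((x, u, x_next))
    x = x_next
    if t % 100 == 0:                          # refit (a, b) by least squares on the trajectory so far
        Z = np.array([[xi, ui] for xi, ui, _ in history])
        targets = np.array([xn for _, _, xn in history])
        (a_hat, b_hat), *_ = np.linalg.lstsq(Z, targets, rcond=None)
        K = lqr_gain(a_hat, b_hat)

print(f"a_hat = {a_hat:.3f}, b_hat = {b_hat:.3f}")
```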

Approximate residual balancing: debiased inference of average treatment effects in high dimensions

S Athey, GW Imbens, S Wager - Journal of the Royal Statistical …, 2018 - academic.oup.com
There are many settings where researchers are interested in estimating average treatment
effects and are willing to rely on the unconfoundedness assumption, which requires that the …
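
For reference, the assumption named in the abstract is usually stated, in standard potential-outcomes notation, as conditional independence of treatment assignment and potential outcomes given covariates, with the average treatment effect as the target quantity:

```latex
% Unconfoundedness and the average treatment effect (standard potential-outcomes notation)
\[
  \{Y_i(0),\, Y_i(1)\} \;\perp\!\!\!\perp\; W_i \mid X_i,
  \qquad
  \tau \;=\; \mathbb{E}\bigl[Y_i(1) - Y_i(0)\bigr].
\]
```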