On the computational efficiency of training neural networks

R Livni, S Shalev-Shwartz… - Advances in Neural Information Processing Systems, 2014 - proceedings.neurips.cc
It is well-known that neural networks are computationally hard to train. On the other hand, in
practice, modern-day neural networks are trained efficiently using SGD and a variety of tricks …
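
A minimal sketch of the practice the abstract refers to: training a small one-hidden-layer network with plain SGD. The architecture, data, and hyperparameters below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: a smooth target of a random linear projection.
X = rng.normal(size=(512, 10))
y = np.tanh(X @ rng.normal(size=10))

# One hidden ReLU layer, trained with single-example SGD on squared loss.
W1 = rng.normal(scale=0.5, size=(10, 32))
w2 = rng.normal(scale=0.5, size=32)
lr = 0.05

for _ in range(2000):
    i = rng.integers(len(X))
    h = np.maximum(X[i] @ W1, 0.0)                  # hidden activations
    err = h @ w2 - y[i]                             # signed prediction error
    grad_w2 = err * h
    grad_W1 = err * np.outer(X[i], (h > 0) * w2)    # backprop through the ReLU
    w2 -= lr * grad_w2
    W1 -= lr * grad_W1

print("final MSE:", np.mean((np.maximum(X @ W1, 0.0) @ w2 - y) ** 2))
```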

Statistical query lower bounds for robust estimation of high-dimensional Gaussians and Gaussian mixtures

I Diakonikolas, DM Kane… - 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), 2017 - ieeexplore.ieee.org
We describe a general technique that yields the first Statistical Query lower bounds for a
range of fundamental high-dimensional learning problems involving Gaussian distributions …
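
For context, these bounds hold in the Statistical Query model, where the learner never sees raw samples. A standard statement of the oracle (the well-known definition, not quoted from this paper):

```latex
% The STAT oracle: the learner asks bounded queries and receives
% tolerance-accurate answers instead of samples from D.
\[
  \mathrm{STAT}(\tau):\quad \text{on query } \phi : \mathbb{R}^d \to [-1,1],
  \ \text{return any } v \ \text{with}\ 
  \bigl|\, v - \mathbb{E}_{x \sim D}[\phi(x)] \,\bigr| \le \tau .
\]
```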

Finite-sample analysis of interpolating linear classifiers in the overparameterized regime

NS Chatterji, PM Long - Journal of Machine Learning Research, 2021 - jmlr.org
We prove bounds on the population risk of the maximum margin algorithm for two-class
linear classification. For linearly separable training data, the maximum margin algorithm has …
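
The algorithm under analysis is the classical maximum-margin (equivalently, minimum-norm interpolating) linear classifier; its standard formulation is stated here for reference:

```latex
% Maximum margin on linearly separable data (x_i, y_i), y_i in {-1, +1}:
% the minimum-norm separator, i.e., the hard-margin SVM.
\[
  \hat{w} \;=\; \arg\min_{w \in \mathbb{R}^d} \|w\|_2
  \quad \text{subject to} \quad
  y_i \langle w, x_i \rangle \ge 1 \quad \text{for all } i = 1, \dots, n .
\]
```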

Theory of disagreement-based active learning

S Hanneke - Foundations and Trends® in Machine Learning, 2014 - nowpublishers.com
Active learning is a protocol for supervised machine learning, in which a learning algorithm
sequentially requests the labels of selected data points from a large pool of unlabeled data …
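
The protocol is easiest to see on 1-D thresholds. Below is a toy CAL-style sketch (the hypothesis class, pool, and oracle are demo assumptions, not the survey's general setting) that requests a label only when hypotheses consistent with past labels disagree:

```python
import numpy as np

rng = np.random.default_rng(1)
true_threshold = 0.62
pool = rng.uniform(size=200)            # large pool of unlabeled points

lo, hi = 0.0, 1.0                       # version space: thresholds in [lo, hi]
queries = 0
for x in pool:
    if lo <= x <= hi:                   # region of disagreement
        y = int(x >= true_threshold)    # request the label
        queries += 1
        if y == 1:
            hi = min(hi, x)             # consistent thresholds are <= x
        else:
            lo = max(lo, x)             # consistent thresholds are > x
    # outside [lo, hi] every consistent hypothesis agrees; no query needed

print(f"{queries} label queries; threshold pinned to [{lo:.3f}, {hi:.3f}]")
```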

Margin based active learning

MF Balcan, A Broder, T Zhang - International Conference on Computational Learning Theory, 2007 - Springer
We present a framework for margin-based active learning of linear separators. We
instantiate it for a few important cases, some of which have been previously considered in …
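
A minimal sketch of the margin-based idea (illustrative only; the least-squares fitting step and batch sizes are stand-in assumptions): fit a linear separator, then spend the label budget on the pool points with the smallest current margin.

```python
import numpy as np

rng = np.random.default_rng(2)
w_star = np.array([1.0, -1.0]) / np.sqrt(2)         # unknown target separator
pool = rng.normal(size=(500, 2))                    # unlabeled pool
labels = np.sign(pool @ w_star)                     # oracle, consulted lazily

queried = list(rng.choice(len(pool), size=5, replace=False))  # random seed set
for _ in range(6):
    X, y = pool[queried], labels[queried]
    w = np.linalg.lstsq(X, y, rcond=None)[0]        # stand-in halfspace learner
    margins = np.abs(pool @ w) / np.linalg.norm(w)  # distance to boundary
    ranked = np.argsort(margins)                    # smallest margin first
    new = [i for i in ranked if i not in queried][:10]
    queried.extend(new)                             # query these labels next

w /= np.linalg.norm(w)
angle = np.degrees(np.arccos(np.clip(w @ w_star, -1.0, 1.0)))
print(f"{len(queried)} labels queried; angle to target: {angle:.2f} degrees")
```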

Efficient algorithms for outlier-robust regression

A Klivans, PK Kothari, R Meka - Conference on Learning Theory, 2018 - proceedings.mlr.press
We give the first polynomial-time algorithm for performing linear or polynomial regression
resilient to adversarial corruptions in both examples and labels. Given a sufficiently large …
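
The paper's algorithm is based on convex relaxations and comes with provable guarantees; as a deliberately simple contrast, here is the naive iteratively-trimmed least squares heuristic for the same corruption setting (the corruption rate and trimming level are demo assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 300, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)
y[:30] += 20.0                                  # adversarially corrupted labels

keep = np.arange(n)
for _ in range(5):
    w = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    residuals = np.abs(X[keep] @ w - y[keep])
    keep = keep[residuals <= np.quantile(residuals, 0.90)]  # trim worst 10%

print("parameter error:", np.linalg.norm(w - w_true))
```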

Near-optimal cryptographic hardness of agnostically learning halfspaces and ReLU regression under Gaussian marginals

I Diakonikolas, D Kane, L Ren - International Conference on Machine Learning, 2023 - proceedings.mlr.press
We study the task of agnostically learning halfspaces under the Gaussian distribution.
Specifically, given labeled examples $(\mathbf{x}, y)$ from an unknown distribution on …
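
For reference, the two objectives whose hardness is established, with $\mathbf{x}$ drawn from the standard Gaussian on $\mathbb{R}^d$ (these are the standard formulations, not quoted from the paper):

```latex
% Both problems over labeled pairs (x, y) with x ~ N(0, I_d):
\[
  \text{halfspaces:}\quad
  \min_{w}\ \Pr\bigl[\operatorname{sign}(\langle w, \mathbf{x}\rangle) \ne y\bigr]
  \qquad
  \text{ReLU regression:}\quad
  \min_{w}\ \mathbb{E}\bigl[\bigl(\max(0, \langle w, \mathbf{x}\rangle) - y\bigr)^{2}\bigr]
\]
```

In the agnostic setting the learner must compete with the best $w$ while assuming nothing about how well any hypothesis fits the labels.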

The power of localization for efficiently learning linear separators with noise

P Awasthi, MF Balcan, PM Long - Journal of the ACM (JACM), 2017 - dl.acm.org
We introduce a new approach for designing computationally efficient learning algorithms
that are tolerant to noise, and we demonstrate its effectiveness by designing algorithms with …
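
The core localization idea can be sketched in a few lines; this is a toy noiseless demo with an assumed least-squares refit, not the paper's full noise-tolerant algorithm. Each round refits using only the points in a shrinking band around the current hyperplane:

```python
import numpy as np

rng = np.random.default_rng(4)
w_star = np.array([0.8, 0.6])                   # unknown unit-norm target
X = rng.normal(size=(2000, 2))
y = np.sign(X @ w_star)

w = np.array([1.0, 0.0])                        # crude initial hypothesis
band = 1.0
for _ in range(8):
    inside = np.abs(X @ w) <= band              # localize to a band around w
    w = np.linalg.lstsq(X[inside], y[inside], rcond=None)[0]
    w /= np.linalg.norm(w)
    band *= 0.7                                 # shrink the band each round

angle = np.degrees(np.arccos(np.clip(w @ w_star, -1.0, 1.0)))
print(f"angle to target after localization: {angle:.2f} degrees")
```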

Near-optimal SQ lower bounds for agnostically learning halfspaces and ReLUs under Gaussian marginals

I Diakonikolas, D Kane, N Zarifis - Advances in Neural Information Processing Systems, 2020 - proceedings.neurips.cc
We study the fundamental problems of agnostically learning halfspaces and ReLUs under
Gaussian marginals. In the former problem, given labeled examples $(\mathbf{x}, y)$ from an …
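
To make the model these lower bounds constrain concrete, here is a toy simulation of the STAT(τ) oracle defined above; the particular query and data source are assumptions for the demo:

```python
import numpy as np

rng = np.random.default_rng(5)

def stat_oracle(phi, samples, tau):
    """Answer E[phi] up to tolerance tau; any answer within tau is legal."""
    est = np.mean([phi(s) for s in samples])
    return est + rng.uniform(-tau, tau)

d = 4
X = rng.normal(size=(10_000, d))
y = np.sign(X[:, 0])                          # labels from a halfspace
samples = list(zip(X, y))

# One example query: the correlation E[x_1 * y], which here equals
# E|x_1| = sqrt(2/pi) ~ 0.798.
ans = stat_oracle(lambda s: s[0][0] * s[1], samples, tau=0.01)
print("oracle answer:", ans)
```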

Provably efficient, succinct, and precise explanations

G Blanc, J Lange, LY Tan - Advances in Neural Information Processing Systems, 2021 - proceedings.neurips.cc
We consider the problem of explaining the predictions of an arbitrary blackbox model $f$:
given query access to $f$ and an instance $x$, output a small set of $x$'s features that in …
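
A greedy heuristic sketch of the problem statement (not the paper's provably efficient algorithm; the blackbox, the stability test, and the 0.99 threshold are demo assumptions): grow a feature set S until pinning $x$'s values on S makes the blackbox prediction stable under resampling the remaining features.

```python
import numpy as np

rng = np.random.default_rng(6)
d = 8

def f(z):                                    # the blackbox: a 2-of-3 vote, as a demo
    return int(z[0] + z[2] + z[5] >= 2)

x = rng.integers(0, 2, size=d)               # instance to explain

def stability(S, trials=500):
    """Fraction of random completions that keep f's prediction when S is pinned."""
    idx = list(S)
    agree = 0
    for _ in range(trials):
        z = rng.integers(0, 2, size=d)
        z[idx] = x[idx]                      # fix x's features on S
        agree += f(z) == f(x)
    return agree / trials

S = set()
while stability(S) < 0.99 and len(S) < d:
    # Greedily add the feature that most stabilizes the prediction.
    S.add(max(set(range(d)) - S, key=lambda i: stability(S | {i})))

print("explanation features:", sorted(S), "stability:", stability(S))
```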