Memory bounds for continual learning
Continual learning, or lifelong learning, is a formidable current challenge to machine
learning. It requires the learner to solve a sequence of k different learning tasks, one after …
Proper learning, Helly number, and an optimal SVM bound
The classical PAC sample complexity bounds are stated for any Empirical Risk Minimizer
(ERM) and contain an extra logarithmic factor $\log(1/\epsilon)$ which is known to be …
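For reference (standard facts of realizable PAC learning, not quoted from the snippet above): with VC dimension $d$, accuracy $\epsilon$, and confidence $\delta$, the general ERM guarantee versus the optimal rate read roughly
$m_{\mathrm{ERM}}(\epsilon,\delta) = O\!\left(\frac{d\log(1/\epsilon) + \log(1/\delta)}{\epsilon}\right) \qquad \text{vs.} \qquad m_{\mathrm{opt}}(\epsilon,\delta) = O\!\left(\frac{d + \log(1/\delta)}{\epsilon}\right),$
so the $\log(1/\epsilon)$ factor is exactly what separates the two.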
Active learning with simple questions
V Kontonis, M Ma… - The Thirty Seventh Annual …, 2024 - proceedings.mlr.press
We consider an active learning setting where a learner is presented with a pool $S$ of $n$
unlabeled examples belonging to a domain $\mathcal{X}$ and asks queries to find the …
Metalearning with very few samples per task
Metalearning and multitask learning are two frameworks for solving a group of related
learning tasks more efficiently than we could hope to solve each of the individual tasks on …
Online learning with set-valued feedback
We study a variant of online multiclass classification where the learner predicts a single
label but receives a \textit{set of labels} as feedback. In this model, the learner is penalized …
Collaborative top distribution identifications with limited interaction
We consider the following problem in this paper: given a set of n distributions, find the top-m
ones with the largest means. This problem is also called top-m arm identifications in the …
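As an illustrative baseline for this problem statement only (identifiers and the sampling budget are our own assumptions, and the paper's actual subject, collaborative agents with limited rounds of interaction, is not modeled here):

import random

def top_m_by_empirical_mean(arms, m, samples_per_arm=1000):
    """Pull every arm equally often and return the indices of the m largest
    empirical means; a naive, non-adaptive sketch of top-m identification."""
    means = [sum(arm() for _ in range(samples_per_arm)) / samples_per_arm
             for arm in arms]
    return sorted(range(len(arms)), key=lambda i: means[i], reverse=True)[:m]

# Example: five Bernoulli arms; identify the top two by mean.
arms = [lambda p=p: 1.0 if random.random() < p else 0.0
        for p in (0.1, 0.3, 0.5, 0.7, 0.9)]
print(top_m_by_empirical_mean(arms, m=2))   # typically [4, 3]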
The communication complexity of optimization
We consider the communication complexity of a number of distributed optimization
problems. We start with the problem of solving a linear system. Suppose there is a …
On optimal learning under targeted data poisoning
Consider the task of learning a hypothesis class $\mathcal{H}$ in the presence of an
adversary that can replace up to an $\eta$ fraction of the examples in the training set with …
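As a hypothetical illustration of the threat model described above (the function names and the choice of which examples get replaced are our own assumptions, not the paper's):

import random

def poison_training_set(examples, eta, corrupt):
    """The adversary may replace up to an eta fraction of the n training
    examples with arbitrary examples of its choosing; the learner only ever
    sees the corrupted set. `corrupt` stands in for the adversary's strategy."""
    corrupted = list(examples)
    budget = int(eta * len(corrupted))              # at most eta * n replacements
    for i in random.sample(range(len(corrupted)), budget):
        corrupted[i] = corrupt(corrupted[i])        # adversary's choice of replacement
    return corrupted

# Example: flip the labels of up to 10% of a toy (x, y) dataset.
data = [(x, x % 2) for x in range(20)]
print(poison_training_set(data, eta=0.1, corrupt=lambda ex: (ex[0], 1 - ex[1])))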
Space lower bounds for linear prediction in the streaming model
We show that fundamental learning tasks, such as finding an approximate linear separator
or linear regression, require memory at least \emph{quadratic} in the dimension, in a natural …
Distributional PAC-Learning from Nisan's Natural Proofs
A Karchmer - arXiv preprint arXiv:2310.03641, 2023 - arxiv.org
(Abridged) Carmosino et al. (2016) demonstrated that natural proofs of circuit lower bounds
for $\Lambda$ imply efficient algorithms for learning $\Lambda$-circuits, but only over the uniform …