Memory bounds for continual learning

X Chen, C Papadimitriou, B Peng - 2022 IEEE 63rd Annual …, 2022 - ieeexplore.ieee.org
Continual learning, or lifelong learning, is a formidable current challenge to machine
learning. It requires the learner to solve a sequence of $k$ different learning tasks, one after …
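
The setting is concrete enough to pin down in code. Below is a minimal Python sketch of the sequential protocol, with hypothetical task and update interfaces; the paper's actual model and memory accounting are not reproduced here.

    # One learner faces k tasks arriving one after another; `state` is
    # its only carried-over memory, the quantity whose size memory
    # bounds constrain. Interfaces here are illustrative placeholders.
    def continual_learning(tasks, update, state=None):
        """tasks: list of k datasets; update: (state, dataset) -> new state."""
        for dataset in tasks:        # tasks are revealed sequentially
            state = update(state, dataset)
        return state                 # must still support all tasks seen

    # Usage: trivially summarize each task by its mean, in order.
    tasks = [[1.0, 2.0], [3.0, 5.0], [10.0]]
    state = continual_learning(
        tasks, lambda s, d: (s or []) + [sum(d) / len(d)])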

Proper learning, Helly number, and an optimal SVM bound

O Bousquet, S Hanneke, S Moran… - … on Learning Theory, 2020 - proceedings.mlr.press
The classical PAC sample complexity bounds are stated for any Empirical Risk Minimizer
(ERM) and contain an extra logarithmic factor $\log(1/\epsilon)$, which is known to be …
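
For context, these are the two standard realizable-case bounds being contrasted, for a class of VC dimension $d$ (textbook statements, not quoted from this paper): any ERM needs

\[
m_{\mathrm{ERM}}(\epsilon,\delta) = O\!\left(\frac{d\log(1/\epsilon) + \log(1/\delta)}{\epsilon}\right)
\quad\text{samples, versus the optimal}\quad
m(\epsilon,\delta) = \Theta\!\left(\frac{d + \log(1/\delta)}{\epsilon}\right).
\]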

Active learning with simple questions

V Kontonis, M Ma… - The Thirty Seventh Annual …, 2024 - proceedings.mlr.press
We consider an active learning setting where a learner is presented with a pool $S$ of $n$ unlabeled examples belonging to a domain $\mathcal{X}$ and asks queries to find the …
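
The canonical warm-up for why queries help is learning a one-dimensional threshold from a pool: binary search spends $O(\log n)$ label queries where passive learning would inspect $\Omega(n)$ labels. A minimal Python sketch of that baseline (the paper's "simple questions" are richer queries than plain label requests):

    def learn_threshold(pool, query_label):
        """pool: sorted unlabeled points; query_label(x) -> 0 or 1,
        assumed consistent with some threshold t (label 1 iff x >= t)."""
        lo, hi = 0, len(pool)        # invariant: boundary index in [lo, hi]
        while lo < hi:
            mid = (lo + hi) // 2
            if query_label(pool[mid]) == 1:
                hi = mid             # boundary at or before mid
            else:
                lo = mid + 1         # boundary strictly after mid
        return lo                    # points at index >= lo get label 1

    # Usage: n = 1000 points, true threshold at 0.37 -> ~log2(n) queries.
    pool = sorted(i / 1000 for i in range(1000))
    boundary = learn_threshold(pool, lambda x: int(x >= 0.37))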

Metalearning with very few samples per task

M Aliakbarpour, K Bairaktari, G Brown… - The Thirty Seventh …, 2024 - proceedings.mlr.press
Metalearning and multitask learning are two frameworks for solving a group of related
learning tasks more efficiently than we could hope to solve each of the individual tasks on …

Online learning with set-valued feedback

V Raman, U Subedi, A Tewari - The Thirty Seventh Annual …, 2024 - proceedings.mlr.press
We study a variant of online multiclass classification where the learner predicts a single label but receives a \textit{set of labels} as feedback. In this model, the learner is penalized …
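
A sketch of the interaction being analyzed, assuming the natural reading that the learner suffers loss exactly when its single prediction falls outside the revealed set (the learner's internals below are placeholders):

    import random

    def play_round(predict, reveal_feedback):
        y_hat = predict()                 # learner commits to one label
        feedback_set = reveal_feedback()  # a *set* of labels is revealed
        return 0 if y_hat in feedback_set else 1

    # Usage: a trivial randomized learner against a fixed feedback set.
    labels = [0, 1, 2]
    total_loss = sum(play_round(lambda: random.choice(labels),
                                lambda: {0, 1})
                     for _ in range(100))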

Collaborative top distribution identifications with limited interaction

N Karpov, Q Zhang, Y Zhou - 2020 IEEE 61st Annual …, 2020 - ieeexplore.ieee.org
We consider the following problem in this paper: given a set of $n$ distributions, find the top-$m$ ones with the largest means. This problem is also called top-$m$ arm identifications in the …
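
A non-collaborative baseline makes the objective concrete: pull every arm equally often and keep the $m$ largest empirical means. The paper's actual subject, multiple agents collaborating in few rounds of interaction, is not modeled in this sketch:

    import random
    import statistics

    def top_m_uniform(arms, m, pulls_per_arm):
        """arms: list of zero-argument samplers, one per distribution."""
        means = [statistics.fmean(arm() for _ in range(pulls_per_arm))
                 for arm in arms]
        ranked = sorted(range(len(arms)), key=means.__getitem__, reverse=True)
        return ranked[:m]           # indices of the m largest empirical means

    # Usage: five Gaussian arms, identify the top two.
    mus = [0.1, 0.9, 0.5, 0.8, 0.3]
    arms = [lambda mu=mu: random.gauss(mu, 1.0) for mu in mus]
    print(top_m_uniform(arms, m=2, pulls_per_arm=2000))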

The communication complexity of optimization

SS Vempala, R Wang, DP Woodruff - … of the Fourteenth Annual ACM-SIAM …, 2020 - SIAM
We consider the communication complexity of a number of distributed optimization
problems. We start with the problem of solving a linear system. Suppose there is a …
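
For the linear-system case, one standard upper-bound protocol gives a feel for what is being counted: when rows of $(A, b)$ are split across machines, each machine can send its $d \times d$ Gram matrix and a length-$d$ vector ($O(d^2)$ numbers) to a coordinator, which solves the normal equations. This is only a textbook baseline, not the paper's protocols or lower bounds:

    import numpy as np

    def coordinator_solve(shards):
        """shards: per-machine (A_i, b_i); one O(d^2)-size message each."""
        d = shards[0][0].shape[1]
        G = np.zeros((d, d))          # accumulated A^T A
        v = np.zeros(d)               # accumulated A^T b
        for A_i, b_i in shards:
            G += A_i.T @ A_i
            v += A_i.T @ b_i
        return np.linalg.solve(G, v)  # least-squares solution

    # Usage: four machines, 25 rows each, dimension d = 3.
    rng = np.random.default_rng(0)
    x_true = rng.standard_normal(3)
    shards = [(A, A @ x_true)
              for A in (rng.standard_normal((25, 3)) for _ in range(4))]
    x_hat = coordinator_solve(shards)  # recovers x_true up to roundoff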

On optimal learning under targeted data poisoning

S Hanneke, A Karbasi, M Mahmoody… - Advances in …, 2022 - proceedings.neurips.cc
Consider the task of learning a hypothesis class $\mathcal{H}$ in the presence of an adversary that can replace up to an $\eta$ fraction of the examples in the training set with …
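
The threat model fits in one function: before training, an adversary may replace up to $\lfloor \eta n \rfloor$ of the $n$ examples with points of its choosing. The replacement strategy below is a placeholder attack; the paper asks what error any learner can guarantee against the worst such adversary:

    import math
    import random

    def poison(train_set, eta, forge):
        """Replace up to floor(eta * n) examples with forged ones."""
        n = len(train_set)
        corrupted = list(train_set)
        for i in random.sample(range(n), math.floor(eta * n)):
            corrupted[i] = forge(train_set[i])
        return corrupted

    # Usage: a simple label-flipping attack on a 10% budget.
    data = [(random.random(), 0) for _ in range(100)]
    poisoned = poison(data, eta=0.1, forge=lambda ex: (ex[0], 1))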

Space lower bounds for linear prediction in the streaming model

Y Dagan, G Kur, O Shamir - Conference on Learning Theory, 2019 - proceedings.mlr.press
We show that fundamental learning tasks, such as finding an approximate linear separator or linear regression, require memory at least \emph{quadratic} in the dimension, in a natural …
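
The complementary upper bound helps place the result: a one-pass algorithm for linear regression can store just $X^\top X$ and $X^\top y$, i.e. $\Theta(d^2)$ numbers, so the quadratic lower bound says this kind of sufficient statistic cannot be substantially compressed. A short sketch of that upper bound:

    import numpy as np

    class StreamingLeastSquares:
        def __init__(self, d):
            self.G = np.zeros((d, d))   # running X^T X
            self.v = np.zeros(d)        # running X^T y

        def observe(self, x, y):        # one pass, one example at a time
            self.G += np.outer(x, x)
            self.v += y * x

        def solve(self):
            return np.linalg.solve(self.G, self.v)

    # Usage: stream 1000 examples in dimension d = 5.
    rng = np.random.default_rng(1)
    w = rng.standard_normal(5)
    sls = StreamingLeastSquares(5)
    for _ in range(1000):
        x = rng.standard_normal(5)
        sls.observe(x, x @ w)
    print(np.allclose(sls.solve(), w))  # True up to roundoff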

Distributional PAC-Learning from Nisan's Natural Proofs

A Karchmer - arXiv preprint arXiv:2310.03641, 2023 - arxiv.org
(Abridged) Carmosino et al. (2016) demonstrated that natural proofs of circuit lower bounds for $\Lambda$ imply efficient algorithms for learning $\Lambda$-circuits, but only over the uniform …