How important is the train-validation split in meta-learning?

Y Bai, M Chen, P Zhou, T Zhao, J Lee… - International …, 2021 - proceedings.mlr.press
Meta-learning aims to perform fast adaptation on a new task through learning a “prior” from
multiple existing tasks. A common practice in meta-learning is to perform a train-validation …

Learning-to-learn stochastic gradient descent with biased regularization

G Denevi, C Ciliberto, R Grazzi… - … on Machine Learning, 2019 - proceedings.mlr.press
We study the problem of learning-to-learn: inferring a learning algorithm that works well on
a family of tasks sampled from an unknown distribution. As class of algorithms we consider …

Efficient meta learning via minibatch proximal update

P Zhou, X Yuan, H Xu, S Yan… - Advances in Neural …, 2019 - proceedings.neurips.cc
We address the problem of meta-learning, which learns a prior over hypotheses from a
sample of meta-training tasks for fast adaptation on meta-testing tasks. A particularly simple …

MAML and ANIL provably learn representations

L Collins, A Mokhtari, S Oh… - … on Machine Learning, 2022 - proceedings.mlr.press
Recent empirical evidence has driven conventional wisdom to believe that gradient-based
meta-learning (GBML) methods perform well at few-shot learning because they learn an …