How important is the train-validation split in meta-learning?
Meta-learning aims to perform fast adaptation on a new task through learning a “prior” from
multiple existing tasks. A common practice in meta-learning is to perform a train-validation …
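A minimal sketch of the practice this entry refers to: each task's data is split into a train (support) part used for inner-loop adaptation and a validation (query) part used to evaluate the adapted parameters and drive the meta-update. The first-order linear-regression setup and all names below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def adapt(w, X_sup, y_sup, inner_lr=0.1, inner_steps=5):
    """Inner loop: adapt the meta-parameters on the task's train (support) split."""
    w = w.copy()
    for _ in range(inner_steps):
        w -= inner_lr * 2 * X_sup.T @ (X_sup @ w - y_sup) / len(y_sup)
    return w

def meta_step(w, tasks, outer_lr=0.01, val_frac=0.5):
    """Outer loop: the meta-gradient comes from the validation (query) split of each task."""
    meta_grad = np.zeros_like(w)
    for X, y in tasks:
        k = int(len(y) * (1 - val_frac))          # train-validation split within the task
        w_task = adapt(w, X[:k], y[:k])
        meta_grad += 2 * X[k:].T @ (X[k:] @ w_task - y[k:]) / (len(y) - k)
    return w - outer_lr * meta_grad / len(tasks)  # first-order (FOMAML-style) meta-update

rng = np.random.default_rng(0)
tasks = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(8)]
w = np.zeros(5)
for _ in range(100):
    w = meta_step(w, tasks)
```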
Learning-to-learn stochastic gradient descent with biased regularization
We study the problem of learning-to-learn: inferring a learning algorithm that works well on
a family of tasks sampled from an unknown distribution. As a class of algorithms we consider …
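The truncated sentence points to the class of within-task algorithms this paper considers: SGD on a task objective regularized toward a learned bias vector. A rough sketch under that reading (the least-squares tasks, step sizes, and the averaging-style outer update are assumptions, not the paper's procedure):

```python
import numpy as np

def sgd_biased_reg(h, X, y, lam=1.0, lr=0.05, epochs=10, seed=0):
    """Within-task learner: SGD on squared loss + (lam/2) * ||w - h||^2, biased toward h."""
    rng = np.random.default_rng(seed)
    w = h.copy()
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            w -= lr * ((X[i] @ w - y[i]) * X[i] + lam * (w - h))
    return w

def learn_bias(tasks, dim, meta_lr=0.05, rounds=50):
    """Outer level: move the bias h toward the weights each training task converges to."""
    h = np.zeros(dim)
    for _ in range(rounds):
        for X, y in tasks:
            h += meta_lr * (sgd_biased_reg(h, X, y) - h)
    return h
```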
Efficient meta learning via minibatch proximal update
We address the problem of meta-learning, which learns a prior over hypotheses from a
sample of meta-training tasks for fast adaptation on meta-testing tasks. A particularly simple …
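One reading of "minibatch proximal update": each sampled meta-training task solves a proximal subproblem around the current prior w, i.e. min over theta of L(theta) + (lam/2)||theta - w||^2 on a minibatch of its data, and the prior is pulled toward the minimizer. The closed-form least-squares solve below is an illustrative assumption, not the paper's algorithm.

```python
import numpy as np

def proximal_solve(w, X, y, lam=1.0):
    """Solve min_theta ||X theta - y||^2 / n + (lam/2) * ||theta - w||^2 in closed form."""
    n, d = X.shape
    A = 2 * X.T @ X / n + lam * np.eye(d)
    b = 2 * X.T @ y / n + lam * w
    return np.linalg.solve(A, b)

def meta_train(tasks, dim, meta_lr=0.1, rounds=200, batch=16, seed=0):
    """Pull the prior w toward each sampled task's minibatch proximal solution."""
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    for _ in range(rounds):
        X, y = tasks[rng.integers(len(tasks))]                     # sample a task
        idx = rng.choice(len(y), size=min(batch, len(y)), replace=False)
        theta = proximal_solve(w, X[idx], y[idx])                  # minibatch proximal subproblem
        w += meta_lr * (theta - w)
    return w
```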
MAML and ANIL provably learn representations
Recent empirical evidence has driven conventional wisdom to believe that gradient-based
meta-learning (GBML) methods perform well at few-shot learning because they learn an …
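For context on the two methods in the title: MAML adapts all parameters in its inner loop, while ANIL ("almost no inner loop") freezes the shared representation and adapts only the last-layer head. A toy sketch with a linear representation B and head w (the least-squares loss and dimensions are assumptions made for illustration):

```python
import numpy as np

def inner_loop(B, w, X, y, lr=0.1, steps=5, anil=True):
    """Task adaptation. ANIL updates only the head w; MAML also updates the representation B."""
    B, w = B.copy(), w.copy()
    for _ in range(steps):
        feats = X @ B                                   # shared representation (body)
        resid = feats @ w - y                           # residual of the linear head
        grad_w = 2 * feats.T @ resid / len(y)
        if not anil:                                    # MAML: adapt the representation too
            B -= lr * 2 * X.T @ np.outer(resid, w) / len(y)
        w -= lr * grad_w
    return B, w
```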