A comprehensive survey of few-shot learning: Evolution, applications, challenges, and opportunities

Y Song, T Wang, P Cai, SK Mondal… - ACM Computing Surveys, 2023 - dl.acm.org
Few-shot learning (FSL) has emerged as an effective learning method and shows great
potential. Despite the recent creative works in tackling FSL tasks, learning valid information …

P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks

X Liu, K Ji, Y Fu, WL Tam, Z Du, Z Yang… - arXiv preprint arXiv …, 2021 - arxiv.org
Prompt tuning, which only tunes continuous prompts with a frozen language model,
substantially reduces per-task storage and memory usage at training. However, in the …

GPT understands, too

X Liu, Y Zheng, Z Du, M Ding, Y Qian, Z Yang, J Tang - AI Open, 2024 - Elsevier
Prompting a pretrained language model with natural language patterns has proven
effective for natural language understanding (NLU). However, our preliminary study reveals …

GLUE-X: Evaluating natural language understanding models from an out-of-distribution generalization perspective

L Yang, S Zhang, L Qin, Y Li, Y Wang, H Liu… - arXiv preprint arXiv …, 2022 - arxiv.org
Pre-trained language models (PLMs) are known to improve the generalization performance
of natural language understanding models by leveraging large amounts of data during the …

A survey on stability of learning with limited labelled data and its sensitivity to the effects of randomness

B Pecher, I Srba, M Bielikova - ACM Computing Surveys, 2024 - dl.acm.org
Learning with limited labelled data, such as prompting, in-context learning, fine-tuning, meta-
learning, or few-shot learning, aims to effectively train a model using only a small amount of …

FLEX: Unifying evaluation for few-shot NLP

J Bragg, A Cohan, K Lo… - Advances in neural …, 2021 - proceedings.neurips.cc
Few-shot NLP research is highly active, yet conducted in disjoint research threads with
evaluation suites that lack challenging-yet-realistic testing setups and fail to employ careful …

What are the best systems? New perspectives on NLP benchmarking

P Colombo, N Noiry, E Irurozki… - Advances in neural …, 2022 - proceedings.neurips.cc
In Machine Learning, a benchmark refers to an ensemble of datasets associated
with one or multiple metrics together with a way to aggregate different systems …

LINGUIST: Language model instruction tuning to generate annotated utterances for intent classification and slot tagging

A Rosenbaum, S Soltan, W Hamza, Y Versley… - arXiv preprint arXiv …, 2022 - arxiv.org
We present LINGUIST, a method for generating annotated data for Intent Classification and
Slot Tagging (IC+ST), via fine-tuning AlexaTM 5B, a 5-billion-parameter multilingual …

Zero- and few-shot NLP with pretrained language models

I Beltagy, A Cohan, R Logan IV, S Min… - Proceedings of the 60th …, 2022 - aclanthology.org
The ability to efficiently learn from little-to-no data is critical to applying NLP to tasks where
data collection is costly or otherwise difficult. This is a challenging setting both academically …

MEAL: Stable and active learning for few-shot prompting

A Köksal, T Schick, H Schütze - arXiv preprint arXiv:2211.08358, 2022 - arxiv.org
Few-shot classification has made great strides due to foundation models that, through
priming and prompting, are highly effective few-shot learners. However, this approach has …