Paradigm shift in natural language processing
In the era of deep learning, modeling for most natural language processing (NLP) tasks has
converged into several mainstream paradigms. For example, we usually adopt the …
Discovering latent knowledge in language models without supervision
Existing techniques for training language models can be misaligned with the truth: if we train
models with imitation learning, they may reproduce errors that humans make; if we train …
Generating training data with language models: Towards zero-shot language understanding
Pretrained language models (PLMs) have demonstrated remarkable performance in various
natural language processing tasks: Unidirectional PLMs (e.g., GPT) are well known for their …
Parameter-efficient multi-task fine-tuning for transformers via shared hypernetworks
State-of-the-art parameter-efficient fine-tuning methods rely on introducing adapter modules
between the layers of a pretrained language model. However, such modules are trained …
Differentiable prompt makes pre-trained language models better few-shot learners
Large-scale pre-trained language models have contributed significantly to natural language
processing by demonstrating remarkable abilities as few-shot learners. However, their …
Entailment as few-shot learner
Large pre-trained language models (LMs) have demonstrated remarkable ability as few-shot
learners. However, their success hinges largely on scaling model parameters to a degree …
CrossFit: A few-shot learning challenge for cross-task generalization in NLP
Humans can learn a new language task efficiently with only a few examples, by leveraging
their knowledge obtained when learning prior tasks. In this paper, we explore whether and …
Label verbalization and entailment for effective zero- and few-shot relation extraction
Relation extraction systems require large amounts of labeled examples which are costly to
annotate. In this work we reformulate relation extraction as an entailment task, with simple …
State-of-the-art generalisation research in NLP: a taxonomy and review
The ability to generalise well is one of the primary desiderata of natural language
processing (NLP). Yet, what 'good generalisation' entails and how it should be evaluated is …
Ontology-enhanced Prompt-tuning for Few-shot Learning
Few-shot Learning (FSL) aims to make predictions based on a limited number of
samples. Structured data such as knowledge graphs and ontology libraries has been …