Recent advances in natural language processing via large pre-trained language models: A survey
Large, pre-trained language models (PLMs) such as BERT and GPT have drastically
changed the Natural Language Processing (NLP) field. For numerous NLP tasks …
Conversational agents in therapeutic interventions for neurodevelopmental disorders: a survey
Neurodevelopmental Disorders (NDD) are a group of conditions with onset in the
developmental period characterized by deficits in the cognitive and social areas …
A survey of large language models
Language is essentially a complex, intricate system of human expressions governed by
grammatical rules. It poses a significant challenge to develop capable AI algorithms for …
Multi-concept customization of text-to-image diffusion
While generative models produce high-quality images of concepts learned from a large-
scale database, a user often wishes to synthesize instantiations of their own concepts (for …
FederatedScope-LLM: A comprehensive package for fine-tuning large language models in federated learning
Large language models (LLMs) have demonstrated great capabilities in various natural
language understanding and generation tasks. These pre-trained LLMs can be further …
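Federated fine-tuning keeps client data local and shares only parameter updates with a server. As a point of reference, a minimal FedAvg-style aggregation is sketched below; the function and variable names are illustrative, and this is not FederatedScope's actual API:

```python
from typing import Dict, List
import torch

def fedavg(client_states: List[Dict[str, torch.Tensor]],
           client_sizes: List[int]) -> Dict[str, torch.Tensor]:
    """Size-weighted average of client parameter dicts (FedAvg).

    Each dict maps parameter names to tensors, e.g. the state_dict of a
    small trainable component such as an adapter or LoRA module.
    """
    total = sum(client_sizes)
    return {
        name: sum(state[name] * (n / total)
                  for state, n in zip(client_states, client_sizes))
        for name in client_states[0]
    }
```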
The power of scale for parameter-efficient prompt tuning
In this work, we explore" prompt tuning", a simple yet effective mechanism for learning" soft
prompts" to condition frozen language models to perform specific downstream tasks. Unlike …
prompts" to condition frozen language models to perform specific downstream tasks. Unlike …
AdapterHub: A framework for adapting transformers
The current modus operandi in NLP involves downloading and fine-tuning pre-trained
models consisting of millions or billions of parameters. Storing and sharing such large …
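AdapterHub builds on bottleneck adapters: small trainable modules inserted into each transformer layer while the pretrained weights stay frozen, so only a few megabytes need to be stored or shared per task. A minimal sketch of such a module, with illustrative dimensions rather than the library's actual classes:

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Down-project, nonlinearity, up-project, plus a residual connection."""

    def __init__(self, hidden_dim: int = 768, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.ReLU()
        # Zero-init the up-projection so the adapter starts as an identity map
        # and training begins from the pretrained model's behaviour.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        return hidden + self.up(self.act(self.down(hidden)))
```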
XHate-999: Analyzing and detecting abusive language across domains and languages
We present XHATE-999, a multi-domain and multilingual evaluation data set for abusive
language detection. By aligning test instances across six typologically diverse languages …
AdapterFusion: Non-destructive task composition for transfer learning
Sequential fine-tuning and multi-task learning are methods aiming to incorporate knowledge
from multiple tasks; however, they suffer from catastrophic forgetting and difficulties in …
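AdapterFusion composes knowledge from several independently trained task adapters by learning an attention-style mixture over their outputs, leaving both the backbone and the adapters untouched. A simplified sketch (the paper's value projection is omitted, and all names are illustrative):

```python
import torch
import torch.nn as nn

class AdapterFusion(nn.Module):
    """Attention-style mixing of the outputs of several frozen task adapters."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.query = nn.Linear(hidden_dim, hidden_dim)
        self.key = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, hidden: torch.Tensor,
                adapter_outputs: list) -> torch.Tensor:
        # hidden: (batch, seq, dim); each adapter output: (batch, seq, dim).
        stacked = torch.stack(adapter_outputs, dim=2)   # (batch, seq, n, dim)
        q = self.query(hidden).unsqueeze(2)             # (batch, seq, 1, dim)
        k = self.key(stacked)                           # (batch, seq, n, dim)
        weights = (q * k).sum(-1).softmax(dim=-1)       # (batch, seq, n)
        # Weighted sum over adapters; computed per token, so different
        # tokens can attend to different adapters.
        return (weights.unsqueeze(-1) * stacked).sum(dim=2)
```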
Parameter-efficient transfer learning with diff pruning
While task-specific finetuning of pretrained networks has led to significant empirical
advances in NLP, the large size of networks makes finetuning difficult to deploy in multi-task …
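Diff pruning learns a sparse, task-specific difference vector that is added to the frozen pretrained parameters, so only the nonzero entries of the diff need to be stored per task. A minimal sketch of the idea for a single linear layer; note that the paper uses a relaxed L0 penalty, while plain L1 stands in here for brevity:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiffLinear(nn.Module):
    """A frozen pretrained linear layer plus a trainable, sparsified diff."""

    def __init__(self, pretrained: nn.Linear):
        super().__init__()
        # Freeze the pretrained weights; only the diff is updated.
        self.weight = pretrained.weight.detach()
        self.bias = pretrained.bias.detach() if pretrained.bias is not None else None
        self.diff = nn.Parameter(torch.zeros_like(self.weight))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.linear(x, self.weight + self.diff, self.bias)

    def sparsity_penalty(self) -> torch.Tensor:
        # Encourages most diff entries to stay at zero.
        return self.diff.abs().sum()

# Training: loss = task_loss + lam * sum(m.sparsity_penalty() for m in diff_layers)
```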