Recent advances in natural language processing via large pre-trained language models: A survey

B Min, H Ross, E Sulem, APB Veyseh… - ACM Computing …, 2023 - dl.acm.org
Large, pre-trained language models (PLMs) such as BERT and GPT have drastically
changed the Natural Language Processing (NLP) field. For numerous NLP tasks …

Conversational agents in therapeutic interventions for neurodevelopmental disorders: a survey

F Catania, M Spitale, F Garzotto - ACM Computing Surveys, 2023 - dl.acm.org
Neurodevelopmental Disorders (NDD) are a group of conditions with onset in the
developmental period characterized by deficits in the cognitive and social areas …

A survey of large language models

WX Zhao, K Zhou, J Li, T Tang, X Wang, Y Hou… - arXiv preprint arXiv …, 2023 - arxiv.org
Language is essentially a complex, intricate system of human expressions governed by
grammatical rules. It poses a significant challenge to develop capable AI algorithms for …

Multi-concept customization of text-to-image diffusion

N Kumari, B Zhang, R Zhang… - Proceedings of the …, 2023 - openaccess.thecvf.com
While generative models produce high-quality images of concepts learned from a large-
scale database, a user often wishes to synthesize instantiations of their own concepts (for …
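
The snippet breaks off before the method, but a common recipe in this line of work is to freeze the diffusion backbone and fine-tune only the cross-attention projections that consume the text embedding (optionally together with a new token embedding per concept). The toy PyTorch module below only illustrates that selective-training pattern; all dimensions and names are assumptions, not the paper's code.

```python
# Toy cross-attention block: freeze everything except the key/value projections
# that read the text conditioning. Dimensions and names are illustrative only.
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    def __init__(self, d_img=64, d_txt=32):
        super().__init__()
        self.to_q = nn.Linear(d_img, d_img)   # stays frozen
        self.to_k = nn.Linear(d_txt, d_img)   # fine-tuned for the new concept
        self.to_v = nn.Linear(d_txt, d_img)   # fine-tuned for the new concept
        self.out = nn.Linear(d_img, d_img)    # stays frozen

    def forward(self, img_tokens, txt_tokens):
        q, k, v = self.to_q(img_tokens), self.to_k(txt_tokens), self.to_v(txt_tokens)
        attn = (q @ k.transpose(-2, -1) / q.size(-1) ** 0.5).softmax(dim=-1)
        return self.out(attn @ v)

block = CrossAttention()
for name, p in block.named_parameters():
    p.requires_grad = name.startswith(("to_k", "to_v"))  # train only K/V projections

print([n for n, p in block.named_parameters() if p.requires_grad])
# ['to_k.weight', 'to_k.bias', 'to_v.weight', 'to_v.bias']
```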

FederatedScope-LLM: A comprehensive package for fine-tuning large language models in federated learning

W Kuang, B Qian, Z Li, D Chen, D Gao, X Pan… - Proceedings of the 30th …, 2024 - dl.acm.org
Large language models (LLMs) have demonstrated great capabilities in various natural
language understanding and generation tasks. These pre-trained LLMs can be further …
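
As a rough sketch of the workflow such a package targets (not FederatedScope-LLM's actual API), the snippet below federates fine-tuning of a small adapter-style module with plain FedAvg: each client takes a few local steps on its private data and the server averages the resulting weights. The module and data here are stand-ins.

```python
# Generic FedAvg over a small trainable module; illustration only.
import copy
import torch
import torch.nn as nn

def local_update(adapter: nn.Module, data, lr=1e-2, steps=5):
    """One client: a few gradient steps on its private data."""
    adapter = copy.deepcopy(adapter)
    opt = torch.optim.SGD(adapter.parameters(), lr=lr)
    x, y = data
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.mse_loss(adapter(x), y).backward()
        opt.step()
    return adapter.state_dict()

def fed_avg(states):
    """Server: element-wise average of the clients' weights."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(0)
    return avg

global_adapter = nn.Linear(16, 16)   # stand-in for an adapter/LoRA module
clients = [(torch.randn(32, 16), torch.randn(32, 16)) for _ in range(3)]
for _ in range(2):                   # two federated rounds
    client_states = [local_update(global_adapter, d) for d in clients]
    global_adapter.load_state_dict(fed_avg(client_states))
```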

The power of scale for parameter-efficient prompt tuning

B Lester, R Al-Rfou, N Constant - arXiv preprint arXiv:2104.08691, 2021 - arxiv.org
In this work, we explore "prompt tuning", a simple yet effective mechanism for learning "soft
prompts" to condition frozen language models to perform specific downstream tasks. Unlike …

AdapterHub: A framework for adapting transformers

J Pfeiffer, A Rücklé, C Poth, A Kamath, I Vulić… - arXiv preprint arXiv …, 2020 - arxiv.org
The current modus operandi in NLP involves downloading and fine-tuning pre-trained
models consisting of millions or billions of parameters. Storing and sharing such large …
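
As a rough illustration of the adapter modules such a framework inserts and shares, here is a minimal bottleneck adapter in PyTorch; the class name, bottleneck size, and stand-in frozen layer are assumptions, not AdapterHub's actual API.

```python
# Bottleneck adapter: down-project, non-linearity, up-project, residual connection.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    def __init__(self, d_model: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        nn.init.zeros_(self.up.weight)   # start close to the identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, hidden):
        return hidden + self.up(torch.relu(self.down(hidden)))

# Usage: keep the pre-trained layer frozen and train only the adapter.
d_model = 64
frozen_layer = nn.Linear(d_model, d_model)   # stand-in for a Transformer sublayer
for p in frozen_layer.parameters():
    p.requires_grad = False
adapter = BottleneckAdapter(d_model)

x = torch.randn(8, 10, d_model)
out = adapter(frozen_layer(x))               # only `adapter` receives gradients
print(sum(p.numel() for p in adapter.parameters()), "trainable adapter parameters")
```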

XHate-999: Analyzing and detecting abusive language across domains and languages

G Glavaš, M Karan, I Vulić - 2020 - madoc.bib.uni-mannheim.de
We present XHATE-999, a multi-domain and multilingual evaluation data set for abusive
language detection. By aligning test instances across six typologically diverse languages …

AdapterFusion: Non-destructive task composition for transfer learning

J Pfeiffer, A Kamath, A Rücklé, K Cho… - arXiv preprint arXiv …, 2020 - arxiv.org
Sequential fine-tuning and multi-task learning are methods aiming to incorporate knowledge
from multiple tasks; however, they suffer from catastrophic forgetting and difficulties in …
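
A minimal sketch of the fusion idea may help: task adapters are trained separately and kept frozen, and a small attention module learns, per token, how to mix their outputs. Shapes and names below are assumptions for illustration, not the paper's implementation.

```python
# Attention-based mixing of the outputs of N frozen task adapters.
import torch
import torch.nn as nn

class AdapterFusion(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.query = nn.Linear(d_model, d_model)
        self.key = nn.Linear(d_model, d_model)
        self.value = nn.Linear(d_model, d_model)

    def forward(self, hidden, adapter_outputs):
        # hidden: (B, T, d); adapter_outputs: (B, T, N, d) from N task adapters
        q = self.query(hidden).unsqueeze(2)              # (B, T, 1, d)
        k = self.key(adapter_outputs)                    # (B, T, N, d)
        v = self.value(adapter_outputs)
        scores = (q * k).sum(-1) / k.size(-1) ** 0.5     # (B, T, N)
        weights = scores.softmax(dim=-1).unsqueeze(-1)   # (B, T, N, 1)
        return (weights * v).sum(2)                      # (B, T, d)

B, T, N, d = 4, 12, 3, 64
fusion = AdapterFusion(d)
mixed = fusion(torch.randn(B, T, d), torch.randn(B, T, N, d))
print(mixed.shape)  # torch.Size([4, 12, 64])
```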

Parameter-efficient transfer learning with diff pruning

D Guo, AM Rush, Y Kim - arXiv preprint arXiv:2012.07463, 2020 - arxiv.org
While task-specific finetuning of pretrained networks has led to significant empirical
advances in NLP, the large size of networks makes finetuning difficult to deploy in multi-task …
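
The abstract points at the core trick: represent each task as a sparse difference vector added to the frozen pre-trained parameters, so only the (small) diff needs to be stored per task. The sketch below illustrates this with a single weight matrix; the paper learns sparsity with a relaxed L0 penalty, whereas the L1 term here is a simplification for illustration.

```python
# Diff-pruning idea in miniature: task weight = frozen pretrained weight + sparse diff.
import torch
import torch.nn as nn

d_in, d_out = 32, 8
pretrained_w = torch.randn(d_out, d_in)        # frozen pre-trained weight
diff = nn.Parameter(torch.zeros(d_out, d_in))  # the only task-specific parameters

def forward(x):
    return x @ (pretrained_w + diff).t()

optimizer = torch.optim.Adam([diff], lr=1e-2)
x, y = torch.randn(16, d_in), torch.randint(0, d_out, (16,))
for _ in range(100):
    optimizer.zero_grad()
    # Task loss plus a sparsity penalty on the diff (L1 here; relaxed L0 in the paper).
    loss = nn.functional.cross_entropy(forward(x), y) + 1e-3 * diff.abs().sum()
    loss.backward()
    optimizer.step()

print("nonzero diff entries:", (diff.abs() > 1e-3).sum().item(), "of", diff.numel())
```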