Parameter-efficient fine-tuning methods for pretrained language models: A critical review and assessment

L Xu, H Xie, SZJ Qin, X Tao, FL Wang - arXiv preprint arXiv:2312.12148, 2023 - arxiv.org
With the continuous growth in the number of parameters of transformer-based pretrained
language models (PLMs), particularly the emergence of large language models (LLMs) with …

Recent advances in natural language processing via large pre-trained language models: A survey

B Min, H Ross, E Sulem, APB Veyseh… - ACM Computing …, 2023 - dl.acm.org
Large, pre-trained language models (PLMs) such as BERT and GPT have drastically
changed the Natural Language Processing (NLP) field. For numerous NLP tasks …

A survey of large language models

WX Zhao, K Zhou, J Li, T Tang… - arXiv preprint arXiv …, 2023 - paper-notes.zhjwpku.com
Ever since the Turing Test was proposed in the 1950s, humans have explored how machines might master
language intelligence. Language is essentially a complex, intricate system of …

NusaCrowd: Open source initiative for Indonesian NLP resources

S Cahyawijaya, H Lovenia, AF Aji, GI Winata… - arXiv preprint arXiv …, 2022 - arxiv.org
We present NusaCrowd, a collaborative initiative to collect and unify existing resources for
Indonesian languages, including opening access to previously non-public resources …

Multi-concept customization of text-to-image diffusion

N Kumari, B Zhang, R Zhang… - Proceedings of the …, 2023 - openaccess.thecvf.com
While generative models produce high-quality images of concepts learned from a large-
scale database, a user often wishes to synthesize instantiations of their own concepts (for …

FederatedScope-LLM: A comprehensive package for fine-tuning large language models in federated learning

W Kuang, B Qian, Z Li, D Chen, D Gao, X Pan… - Proceedings of the 30th …, 2024 - dl.acm.org
Large language models (LLMs) have demonstrated great capabilities in various natural
language understanding and generation tasks. These pre-trained LLMs can be further …

LLM-Adapters: An adapter family for parameter-efficient fine-tuning of large language models

Z Hu, L Wang, Y Lan, W Xu, EP Lim, L Bing… - arXiv preprint arXiv …, 2023 - arxiv.org
The success of large language models (LLMs), like GPT-4 and ChatGPT, has led to the
development of numerous cost-effective and accessible alternatives that are created by …
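The entry names adapter modules for parameter-efficient fine-tuning. As a rough illustration of the general adapter idea (a small bottleneck module added to a frozen backbone, in the spirit of Houlsby-style adapters rather than the specific adapter families benchmarked in LLM-Adapters), here is a minimal PyTorch sketch; all module names and sizes below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a bottleneck adapter for parameter-efficient fine-tuning.
# Illustrates the generic idea (down-project -> nonlinearity -> up-project +
# residual) on a toy frozen layer; not the adapter families from LLM-Adapters.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    def __init__(self, d_model: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)  # project to low dimension
        self.up = nn.Linear(bottleneck, d_model)    # project back up
        self.act = nn.ReLU()

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # Residual connection: the frozen backbone's output is only perturbed
        # by the small trainable bottleneck.
        return hidden + self.up(self.act(self.down(hidden)))

class AdaptedLayer(nn.Module):
    """Wraps a frozen backbone layer; only the adapter is trained."""
    def __init__(self, backbone_layer: nn.Module, d_model: int):
        super().__init__()
        self.layer = backbone_layer
        for p in self.layer.parameters():
            p.requires_grad = False                 # freeze original weights
        self.adapter = BottleneckAdapter(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.layer(x))

# Toy usage: adapt a single frozen linear "layer".
layer = AdaptedLayer(nn.Linear(64, 64), d_model=64)
trainable = [p for p in layer.parameters() if p.requires_grad]
opt = torch.optim.AdamW(trainable, lr=1e-3)
out = layer(torch.randn(8, 64))
out.mean().backward()
opt.step()
```

Only the adapter's down/up projections receive gradient updates, which is what keeps the number of trainable parameters small relative to the frozen backbone.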

SeamlessM4T: Massively Multilingual & Multimodal Machine Translation

L Barrault, YA Chung, MC Meglioli, D Dale… - arXiv preprint arXiv …, 2023 - arxiv.org
What does it take to create the Babel Fish, a tool that can help individuals translate speech
between any two languages? While recent breakthroughs in text-based models have …

The power of scale for parameter-efficient prompt tuning

B Lester, R Al-Rfou, N Constant - arXiv preprint arXiv:2104.08691, 2021 - arxiv.org
In this work, we explore "prompt tuning", a simple yet effective mechanism for learning "soft
prompts" to condition frozen language models to perform specific downstream tasks. Unlike …

Using natural language processing to support peer‐feedback in the age of artificial intelligence: A cross‐disciplinary framework and a research agenda

E Bauer, M Greisel, I Kuznetsov… - British Journal of …, 2023 - Wiley Online Library
Advances in artificial intelligence are accelerating rapidly. New-generation large
language models, such as ChatGPT and GPT-4, have the potential to transform educational …