A survey of controllable text generation using transformer-based pre-trained language models

H Zhang, H Song, S Li, M Zhou, D Song - ACM Computing Surveys, 2023 - dl.acm.org
Controllable Text Generation (CTG) is an emerging area in the field of natural language
generation (NLG). It is regarded as crucial for the development of advanced text generation …

Large language model for table processing: A survey

W Lu, J Zhang, J Fan, Z Fu, Y Chen, X Du - Frontiers of Computer Science, 2025 - Springer
Tables, typically two-dimensional and structured to store large amounts of data, are
essential in daily activities like database queries, spreadsheet manipulations, Web table …

Finetuned language models are zero-shot learners

J Wei, M Bosma, VY Zhao, K Guu, AW Yu… - arXiv preprint arXiv …, 2021 - arxiv.org
This paper explores a simple method for improving the zero-shot learning abilities of
language models. We show that instruction tuning--finetuning language models on a …
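
The snippet above is cut off, but the method it names, instruction tuning, amounts to rephrasing existing supervised tasks as natural-language instructions and fine-tuning on the resulting pairs with the usual language-modeling objective. The Python sketch below only illustrates that data format; the template and example records are invented for illustration, not taken from the paper.

# Build (prompt, target) text pairs in an instruction-tuning style format.
TEMPLATE = "Instruction: {instruction}\nInput: {input}\nResponse:"

raw_examples = [
    {"instruction": "Classify the sentiment of the review as positive or negative.",
     "input": "The film was a complete waste of two hours.",
     "target": "negative"},
    {"instruction": "Translate the sentence to French.",
     "input": "Where is the train station?",
     "target": "Où est la gare ?"},
]

def to_instruction_pair(example):
    """Return (prompt, target) text ready for ordinary supervised fine-tuning."""
    prompt = TEMPLATE.format(instruction=example["instruction"], input=example["input"])
    return prompt, example["target"]

for example in raw_examples:
    prompt, target = to_instruction_pair(example)
    print(prompt, target)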

LoRA: Low-rank adaptation of large language models

EJ Hu, Y Shen, P Wallis, Z Allen-Zhu, Y Li… - arXiv preprint arXiv …, 2021 - arxiv.org
An important paradigm of natural language processing consists of large-scale pre-training
on general domain data and adaptation to particular tasks or domains. As we pre-train larger …
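
As a rough illustration of the low-rank adaptation the title refers to, the PyTorch sketch below wraps a frozen linear layer with trainable rank-r factors A and B, so only the small update B·A is learned. Class and parameter names are illustrative, not the authors' reference implementation.

# Minimal sketch: freeze the pretrained weight and learn a rank-r update B @ A.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, base, r=8, alpha=16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze the pretrained layer
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # starts at zero,
        self.scaling = alpha / r                                  # so the adapter is initially a no-op

    def forward(self, x):
        # y = base(x) + scaling * x A^T B^T; gradients flow only into A and B
        return self.base(x) + self.scaling * (x @ self.A.t() @ self.B.t())


layer = LoRALinear(nn.Linear(768, 768), r=8)
out = layer(torch.randn(2, 10, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(out.shape, trainable)  # torch.Size([2, 10, 768]) 12288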

Adapting large language models via reading comprehension

D Cheng, S Huang, F Wei - The Twelfth International Conference on …, 2023 - openreview.net
We explore how continued pre-training on domain-specific corpora influences large
language models, revealing that training on the raw corpora endows the model with domain …

Prefix-tuning: Optimizing continuous prompts for generation

XL Li, P Liang - arXiv preprint arXiv:2101.00190, 2021 - arxiv.org
Fine-tuning is the de facto way to leverage large pretrained language models to perform
downstream tasks. However, it modifies all the language model parameters and therefore …
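
The snippet gives the motivation (full fine-tuning touches every parameter); prefix-tuning instead trains a short sequence of continuous vectors while the model stays frozen. The sketch below is a simplified, embedding-level variant of that idea (the paper injects prefixes into each layer's keys and values); all class and parameter names are illustrative.

# Simplified prefix-tuning sketch: only the prefix vectors are trainable.
import torch
import torch.nn as nn


class PrefixTunedEncoder(nn.Module):
    def __init__(self, d_model=256, prefix_len=10):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        for p in self.encoder.parameters():       # the "pretrained" model is frozen
            p.requires_grad = False
        # the only trainable parameters: prefix_len continuous prompt vectors
        self.prefix = nn.Parameter(torch.randn(prefix_len, d_model) * 0.02)

    def forward(self, embedded_tokens):
        batch = embedded_tokens.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch, -1, -1)
        return self.encoder(torch.cat([prefix, embedded_tokens], dim=1))


model = PrefixTunedEncoder()
out = model(torch.randn(4, 20, 256))
print(out.shape)  # torch.Size([4, 30, 256]) -- 10 prefix positions + 20 tokens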

Large language models can be strong differentially private learners

X Li, F Tramer, P Liang, T Hashimoto - arXiv preprint arXiv:2110.05679, 2021 - arxiv.org
Differentially Private (DP) learning has seen limited success for building large deep learning
models of text, and straightforward attempts at applying Differentially Private Stochastic …
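
The truncated sentence refers to Differentially Private Stochastic Gradient Descent. For context, the following is a naive sketch of one DP-SGD step, per-example gradient clipping followed by calibrated Gaussian noise, not the more efficient procedure the paper itself develops; the toy model and hyperparameters are placeholders.

# Naive DP-SGD step: clip each per-example gradient to l2 norm C, sum,
# add Gaussian noise of scale sigma * C, average, and update.
import torch
import torch.nn as nn

model = nn.Linear(16, 2)
loss_fn = nn.CrossEntropyLoss()
C, sigma, lr = 1.0, 0.5, 0.1                      # clip norm, noise multiplier, step size

x, y = torch.randn(8, 16), torch.randint(0, 2, (8,))
summed = [torch.zeros_like(p) for p in model.parameters()]

for xi, yi in zip(x, y):                          # per-example gradients (naive loop)
    model.zero_grad()
    loss_fn(model(xi.unsqueeze(0)), yi.unsqueeze(0)).backward()
    norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in model.parameters()))
    scale = min(1.0, C / (norm.item() + 1e-6))    # clip this example's gradient to norm C
    for s, p in zip(summed, model.parameters()):
        s += p.grad * scale

with torch.no_grad():
    for s, p in zip(summed, model.parameters()):
        noisy = (s + sigma * C * torch.randn_like(s)) / len(x)
        p -= lr * noisy                           # one noisy, clipped gradient step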

Differentially private fine-tuning of language models

D Yu, S Naik, A Backurs, S Gopi, HA Inan… - arXiv preprint arXiv …, 2021 - arxiv.org
We give simpler, sparser, and faster algorithms for differentially private fine-tuning of large-
scale pre-trained language models, which achieve the state-of-the-art privacy versus utility …

SPoT: Better frozen model adaptation through soft prompt transfer

T Vu, B Lester, N Constant, R Al-Rfou, D Cer - arXiv preprint arXiv …, 2021 - arxiv.org
There has been growing interest in parameter-efficient methods to apply pre-trained
language models to downstream tasks. Building on the Prompt Tuning approach of Lester et …
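
Building on the Prompt Tuning approach the snippet mentions, the core move in soft prompt transfer is to learn a soft prompt on source tasks and reuse it to initialize the prompt for a target task while the backbone stays frozen. The sketch below shows that transfer step; names and sizes are illustrative, not from the paper's code.

# Soft prompt transfer sketch: warm-start a target-task prompt from a source-task prompt.
import torch
import torch.nn as nn


class SoftPrompt(nn.Module):
    def __init__(self, prompt_len, d_model, init=None):
        super().__init__()
        if init is not None:                      # transfer: warm-start from a source-task prompt
            self.embeddings = nn.Parameter(init.clone())
        else:
            self.embeddings = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)

    def forward(self, token_embeddings):
        batch = token_embeddings.size(0)
        prompt = self.embeddings.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, token_embeddings], dim=1)


# 1) a prompt previously tuned on a mixture of source tasks (stand-in here)
source_prompt = SoftPrompt(prompt_len=20, d_model=512)
# 2) initialize the target task's prompt from it and continue tuning only that prompt
target_prompt = SoftPrompt(prompt_len=20, d_model=512,
                           init=source_prompt.embeddings.detach())
inputs = torch.randn(2, 16, 512)                  # token embeddings from a frozen backbone
print(target_prompt(inputs).shape)                # torch.Size([2, 36, 512])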

ExT5: Towards extreme multi-task scaling for transfer learning

V Aribandi, Y Tay, T Schuster, J Rao, HS Zheng… - arXiv preprint arXiv …, 2021 - arxiv.org
Despite the recent success of multi-task learning and transfer learning for natural language
processing (NLP), few works have systematically studied the effect of scaling up the number …
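
The entry concerns scaling up the number of tasks mixed into text-to-text multi-task pre-training. The sketch below shows one common way such mixtures are sampled (examples-proportional mixing with a cap so the largest tasks do not dominate), offered only as an illustration; the task names and sizes are invented and not taken from the paper.

# Sample which task each training example is drawn from, with capped proportional mixing.
import random

tasks = {"summarization": 100_000, "nli": 400_000, "qa": 250_000}
cap = 200_000
weights = {name: min(size, cap) for name, size in tasks.items()}

def sample_task(rng):
    names, w = zip(*weights.items())
    return rng.choices(names, weights=w, k=1)[0]

rng = random.Random(0)
print([sample_task(rng) for _ in range(8)])       # task assignment for eight training examples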