A survey of controllable text generation using transformer-based pre-trained language models

H Zhang, H Song, S Li, M Zhou, D Song - ACM Computing Surveys, 2023 - dl.acm.org
Controllable Text Generation (CTG) is an emerging area in the field of natural language
generation (NLG). It is regarded as crucial for the development of advanced text generation …

A survey of natural language generation

C Dong, Y Li, H Gong, M Chen, J Li, Y Shen… - ACM Computing …, 2022 - dl.acm.org
This article offers a comprehensive review of the research on Natural Language Generation
(NLG) over the past two decades, especially in relation to data-to-text generation and text-to …

Pretraining language models with human preferences

T Korbak, K Shi, A Chen, RV Bhalerao… - International …, 2023 - proceedings.mlr.press
Abstract Language models (LMs) are pretrained to imitate text from large and diverse
datasets that contain content that would violate human preferences if generated by an LM …

Is ChatGPT a good NLG evaluator? A preliminary study

J Wang, Y Liang, F Meng, Z Sun, H Shi, Z Li… - arXiv preprint arXiv …, 2023 - arxiv.org
Recently, the emergence of ChatGPT has attracted wide attention from the computational
linguistics community. Many prior studies have shown that ChatGPT achieves remarkable …

LaMDA: Language models for dialog applications

R Thoppilan, D De Freitas, J Hall, N Shazeer… - arXiv preprint arXiv …, 2022 - arxiv.org
We present LaMDA: Language Models for Dialog Applications. LaMDA is a family of
Transformer-based neural language models specialized for dialog, which have up to 137B …

Measuring and narrowing the compositionality gap in language models

O Press, M Zhang, S Min, L Schmidt, NA Smith… - arXiv preprint arXiv …, 2022 - arxiv.org
We investigate the ability of language models to perform compositional reasoning tasks
where the overall solution depends on correctly composing the answers to sub-problems …

CoAuthor: Designing a human-AI collaborative writing dataset for exploring language model capabilities

M Lee, P Liang, Q Yang - Proceedings of the 2022 CHI conference on …, 2022 - dl.acm.org
Large language models (LMs) offer unprecedented language generation capabilities and
exciting opportunities for interaction design. However, their highly context-dependent …

Human–AI collaboration enables more empathic conversations in text-based peer-to-peer mental health support

A Sharma, IW Lin, AS Miner, DC Atkins… - Nature Machine …, 2023 - nature.com
Advances in artificial intelligence (AI) are enabling systems that augment and collaborate
with humans to perform simple, mechanistic tasks such as scheduling meetings and …

Contrastive decoding: Open-ended text generation as optimization

XL Li, A Holtzman, D Fried, P Liang, J Eisner… - arXiv preprint arXiv …, 2022 - arxiv.org
Given a language model (LM), maximum probability is a poor decoding objective for open-
ended generation, because it produces short and repetitive text. On the other hand …

Calibrate before use: Improving few-shot performance of language models

Z Zhao, E Wallace, S Feng, D Klein… - … on machine learning, 2021 - proceedings.mlr.press
GPT-3 can perform numerous tasks when provided a natural language prompt that contains
a few training examples. We show that this type of few-shot learning can be unstable: the …