A survey of controllable text generation using transformer-based pre-trained language models

H Zhang, H Song, S Li, M Zhou, D Song - ACM Computing Surveys, 2023 - dl.acm.org
Controllable Text Generation (CTG) is an emerging area in the field of natural language
generation (NLG). It is regarded as crucial for the development of advanced text generation …

Fairness in deep learning: A survey on vision and language research

O Parraga, MD More, CM Oliveira, NS Gavenski… - ACM Computing …, 2025 - dl.acm.org
Despite being responsible for state-of-the-art results in several computer vision and natural
language processing tasks, neural networks have faced harsh criticism due to some of their …

Deep bidirectional language-knowledge graph pretraining

M Yasunaga, A Bosselut, H Ren… - Advances in …, 2022 - proceedings.neurips.cc
Pretraining a language model (LM) on text has been shown to help various downstream
NLP tasks. Recent works show that a knowledge graph (KG) can complement text data …

Towards understanding and mitigating social biases in language models

PP Liang, C Wu, LP Morency… - … on machine learning, 2021 - proceedings.mlr.press
As machine learning methods are deployed in real-world settings such as healthcare, legal
systems, and social science, it is crucial to recognize how they shape social biases and …

Gender and representation bias in GPT-3 generated stories

L Lucy, D Bamman - Proceedings of the third workshop on …, 2021 - aclanthology.org
Using topic modeling and lexicon-based word similarity, we find that stories generated by
GPT-3 exhibit many known gender stereotypes. Generated stories depict different topics and …

REFINER: Reasoning feedback on intermediate representations

D Paul, M Ismayilzada, M Peyrard, B Borges… - arXiv preprint arXiv …, 2023 - arxiv.org
Language models (LMs) have recently shown remarkable performance on reasoning tasks
by explicitly generating intermediate inferences, e.g., chain-of-thought prompting. However …

GreaseLM: Graph reasoning enhanced language models for question answering

X Zhang, A Bosselut, M Yasunaga, H Ren… - arXiv preprint arXiv …, 2022 - arxiv.org
Answering complex questions about textual narratives requires reasoning over both stated
context and the world knowledge that underlies it. However, pretrained language models …

DExperts: Decoding-time controlled text generation with experts and anti-experts

A Liu, M Sap, X Lu, S Swayamdipta… - arXiv preprint arXiv …, 2021 - arxiv.org
Despite recent advances in natural language generation, it remains challenging to control
attributes of generated text. We propose DExperts: Decoding-time Experts, a decoding-time …