A survey of natural language generation

C Dong, Y Li, H Gong, M Chen, J Li, Y Shen… - ACM Computing …, 2022 - dl.acm.org
This article offers a comprehensive review of the research on Natural Language Generation
(NLG) over the past two decades, especially in relation to data-to-text generation and text-to …

Cyber-aggression, cyberbullying, and cyber-grooming: A survey and research challenges

M Mladenović, V Ošmjanski, SV Stanković - ACM Computing Surveys …, 2021 - dl.acm.org
Cyber-aggression, cyberbullying, and cyber-grooming are distinctive and similar
phenomena that represent the objectionable content appearing on online social media …

Challenges in detoxifying language models

J Welbl, A Glaese, J Uesato, S Dathathri… - arXiv preprint arXiv …, 2021 - arxiv.org
Large language models (LM) generate remarkably fluent text and can be efficiently adapted
across NLP tasks. Measuring and guaranteeing the quality of generated text in terms of …

Deep learning for text style transfer: A survey

D Jin, Z Jin, Z Hu, O Vechtomova… - Computational …, 2022 - direct.mit.edu
Text style transfer is an important task in natural language generation, which aims to control
certain attributes in the generated text, such as politeness, emotion, humor, and many …

Reformulating unsupervised style transfer as paraphrase generation

K Krishna, J Wieting, M Iyyer - arXiv preprint arXiv:2010.05700, 2020 - arxiv.org
Modern NLP defines the task of style transfer as modifying the style of a given sentence
without appreciably changing its semantics, which implies that the outputs of style transfer …

Risk taxonomy, mitigation, and assessment benchmarks of large language model systems

T Cui, Y Wang, C Fu, Y Xiao, S Li, X Deng, Y Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) have strong capabilities in solving diverse natural language
processing tasks. However, the safety and security issues of LLM systems have become the …

Disentangled representation learning for non-parallel text style transfer

V John, L Mou, H Bahuleyan, O Vechtomova - arXiv preprint arXiv …, 2018 - arxiv.org
This paper tackles the problem of disentangling the latent variables of style and content in
language models. We propose a simple yet effective approach, which incorporates auxiliary …

HopeEDI: A multilingual hope speech detection dataset for equality, diversity, and inclusion

BR Chakravarthi - Proceedings of the Third Workshop on …, 2020 - aclanthology.org
Over the past few years, systems have been developed to control online content and
eliminate abusive, offensive or hate speech content. However, people in power sometimes …

ParaDetox: Detoxification with parallel data

V Logacheva, D Dementieva… - Proceedings of the …, 2022 - aclanthology.org
We present a novel pipeline for the collection of parallel data for the detoxification task. We
collect non-toxic paraphrases for over 10,000 English toxic sentences. We also show that …

Style transformer: Unpaired text style transfer without disentangled latent representation

N Dai, J Liang, X Qiu, X Huang - arXiv preprint arXiv:1905.05621, 2019 - arxiv.org
Disentangling the content and style in the latent space is prevalent in unpaired text style
transfer. However, two major issues exist in most of the current neural models. 1) It is difficult …