Tailor: Generating and perturbing text with semantic controls

A Ross, T Wu, H Peng, ME Peters… - arXiv preprint arXiv …, 2021 - arxiv.org
Controlled text perturbation is useful for evaluating and improving model generalizability.
However, current techniques rely on training a model for every target perturbation, which is …

MVP: Multi-task supervised pre-training for natural language generation

T Tang, J Li, WX Zhao, JR Wen - arXiv preprint arXiv:2206.12131, 2022 - arxiv.org
Pre-trained language models (PLMs) have achieved remarkable success in natural
language generation (NLG) tasks. Up to now, most NLG-oriented PLMs are pre-trained in an …

On the evaluation metrics for paraphrase generation

L Shen, L Liu, H Jiang, S Shi - arXiv preprint arXiv:2202.08479, 2022 - arxiv.org
In this paper we revisit automatic metrics for paraphrase evaluation and obtain two findings
that disobey conventional wisdom: (1) Reference-free metrics achieve better performance …

Are personalized stochastic parrots more dangerous? Evaluating persona biases in dialogue systems

Y Wan, J Zhao, A Chadha, N Peng… - arXiv preprint arXiv …, 2023 - arxiv.org
Recent advancements in Large Language Models empower them to follow freeform
instructions, including imitating generic or specific demographic personas in conversations …

BITE: Textual backdoor attacks with iterative trigger injection

J Yan, V Gupta, X Ren - arXiv preprint arXiv:2205.12700, 2022 - arxiv.org
Backdoor attacks have become an emerging threat to NLP systems. By providing poisoned
training data, the adversary can embed a "backdoor" into the victim model, which allows …

GCPG: A general framework for controllable paraphrase generation

K Yang, D Liu, W Lei, B Yang, H Zhang… - Findings of the …, 2022 - aclanthology.org
Controllable paraphrase generation (CPG) incorporates various external conditions to
obtain desirable paraphrases. However, existing works only highlight a special condition …

Paraphrase Types for Generation and Detection

JP Wahle, B Gipp, T Ruas - arXiv preprint arXiv:2310.14863, 2023 - arxiv.org
Current approaches in paraphrase generation and detection heavily rely on a single general
similarity score, ignoring the intricate linguistic properties of language. This paper introduces …

TextBox 2.0: A text generation library with pre-trained language models

T Tang, J Li, Z Chen, Y Hu, Z Yu, W Dai, Z Dong… - arXiv preprint arXiv …, 2022 - arxiv.org
To facilitate research on text generation, this paper presents a comprehensive and unified
library, TextBox 2.0, focusing on the use of pre-trained language models (PLMs). To be …

HypoGen: Hyperbole generation with commonsense and counterfactual knowledge

Y Tian, N Peng - arXiv preprint arXiv:2109.05097, 2021 - arxiv.org
A hyperbole is an intentional and creative exaggeration not to be taken literally. Despite its
ubiquity in daily life, the computational explorations of hyperboles are scarce. In this paper …

A large-scale computational study of content preservation measures for text style transfer and paraphrase generation

N Babakov, D Dale, V Logacheva… - Proceedings of the 60th …, 2022 - aclanthology.org
Text style transfer and paraphrasing of texts are actively growing areas of NLP; dozens of
methods for solving these tasks have recently been introduced. In both tasks, the system is …