Deep learning for text style transfer: A survey

D Jin, Z Jin, Z Hu, O Vechtomova… - Computational …, 2022 - direct.mit.edu
Text style transfer is an important task in natural language generation, which aims to control
certain attributes in the generated text, such as politeness, emotion, humor, and many …

EmpDG: Multiresolution interactive empathetic dialogue generation

Q Li, H Chen, Z Ren, P Ren, Z Tu, Z Chen - arXiv preprint arXiv …, 2019 - arxiv.org
A humanized dialogue system is expected to generate empathetic replies, which should be
sensitive to the users' expressed emotion. The task of empathetic dialogue generation is …

Continual learning for text classification with information disentanglement based regularization

Y Huang, Y Zhang, J Chen, X Wang, D Yang - arXiv preprint arXiv …, 2021 - arxiv.org
Continual learning has become increasingly important as it enables NLP models to
constantly learn and gain knowledge over time. Previous continual learning methods are …

Exploring controllable text generation techniques

S Prabhumoye, AW Black, R Salakhutdinov - arXiv preprint arXiv …, 2020 - arxiv.org
Neural controllable text generation is an important area gaining attention due to its plethora
of applications. Although there is a large body of prior work in controllable text generation …

DP-VAE: Human-readable text anonymization for online reviews with differentially private variational autoencoders

B Weggenmann, V Rublack, M Andrejczuk… - Proceedings of the …, 2022 - dl.acm.org
While vast amounts of personal data are shared daily on public online platforms and used
by companies and analysts to gain valuable insights, privacy concerns are also on the rise …

Generating syntactically controlled paraphrases without using annotated parallel pairs

KH Huang, KW Chang - arXiv preprint arXiv:2101.10579, 2021 - arxiv.org
Paraphrase generation plays an essential role in natural language processing (NLP), and it has
many downstream applications. However, training supervised paraphrase models requires …

Lottery ticket adaptation: Mitigating destructive interference in LLMs

A Panda, B Isik, X Qi, S Koyejo, T Weissman… - arXiv preprint arXiv …, 2024 - arxiv.org
Existing methods for adapting large language models (LLMs) to new tasks are not suited to
multi-task adaptation because they modify all the model weights--causing destructive …

Scalable language model with generalized continual learning

B Peng, Z Tian, S Liu, M Yang, J Jia - arXiv preprint arXiv:2404.07470, 2024 - arxiv.org
Continual learning has gained increasing importance as it facilitates the acquisition and
refinement of scalable knowledge and skills in language models. However, existing …

Task-agnostic low-rank adapters for unseen English dialects

Z Xiao, W Held, Y Liu, D Yang - arXiv preprint arXiv:2311.00915, 2023 - arxiv.org
Large Language Models (LLMs) are trained on corpora disproportionally weighted in favor
of Standard American English. As a result, speakers of other dialects experience …

On learning and representing social meaning in NLP: a sociolinguistic perspective

D Nguyen, L Rosseel, J Grieve - … of the 2021 Conference of the …, 2021 - aclanthology.org
The field of NLP has made substantial progress in building meaning representations.
However, an important aspect of linguistic meaning, social meaning, has been largely …