Recent advances in deep learning based dialogue systems: A systematic survey

J Ni, T Young, V Pandelea, F Xue… - Artificial Intelligence Review, 2023 - Springer
Dialogue systems are a popular natural language processing (NLP) task, as they are promising in
real-life applications. They are also complicated, since many NLP tasks deserving study are …

Red teaming language models with language models

E Perez, S Huang, F Song, T Cai, R Ring… - arXiv preprint arXiv …, 2022 - arxiv.org
Language Models (LMs) often cannot be deployed because of their potential to harm users
in hard-to-predict ways. Prior work identifies harmful behaviors before deployment by using …

Neural text generation with unlikelihood training

S Welleck, I Kulikov, S Roller, E Dinan, K Cho… - arXiv preprint arXiv …, 2019 - arxiv.org
Neural text generation is a key tool in natural language applications, but it is well known
there are major problems at its core. In particular, standard likelihood training and decoding …
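
As a rough illustration of the technique this entry introduces, the sketch below implements a token-level unlikelihood term alongside the usual likelihood term, in PyTorch. The tensor shapes, the negative-candidate mask, and the weight alpha are assumptions made for the example, not the authors' reference implementation.

```python
# Minimal sketch of token-level unlikelihood training (Welleck et al., 2019).
# Shapes, the candidate mask, and alpha are illustrative assumptions.
import torch
import torch.nn.functional as F

def unlikelihood_loss(logits, targets, neg_candidate_mask, alpha=1.0):
    # logits: (T, V) next-token logits at each of T steps
    # targets: (T,) gold next tokens for the standard likelihood term
    # neg_candidate_mask: (T, V) 1 where a token should be made *less* likely
    #   (e.g. tokens already generated, to discourage repetition)
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()

    # Likelihood term: usual cross-entropy on the gold tokens.
    mle = F.nll_loss(log_probs, targets)

    # Unlikelihood term: -log(1 - p(c)) summed over negative candidates c.
    ul = -((1.0 - probs).clamp_min(1e-5).log() * neg_candidate_mask).sum() / logits.size(0)

    return mle + alpha * ul

# Toy usage with random numbers, just to show the call signature.
T, V = 8, 100
loss = unlikelihood_loss(torch.randn(T, V), torch.randint(V, (T,)),
                         torch.zeros(T, V).scatter_(1, torch.randint(V, (T, 8)), 1.0))
```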

Open-domain conversational agents: Current progress, open problems, and future directions

S Roller, YL Boureau, J Weston, A Bordes… - arXiv preprint arXiv …, 2020 - arxiv.org
We present our view of what is necessary to build an engaging open-domain conversational
agent: covering the qualities of such an agent, the pieces of the puzzle that have been built …

CLIFF: Contrastive learning for improving faithfulness and factuality in abstractive summarization

S Cao, L Wang - arXiv preprint arXiv:2109.09209, 2021 - arxiv.org
We study generating abstractive summaries that are faithful and factually consistent with the
given articles. A novel contrastive learning formulation is presented, which leverages both …

Contrastive learning reduces hallucination in conversations

W Sun, Z Shi, S Gao, P Ren, M de Rijke… - Proceedings of the AAAI …, 2023 - ojs.aaai.org
Pre-trained language models (LMs) store knowledge in their parameters and can generate
informative responses when used in conversational systems. However, LMs suffer from the …
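
This entry and CLIFF above both train with contrastive objectives that separate faithful outputs from hallucinated ones. The sketch below is a generic InfoNCE-style contrastive loss over pooled representations, intended only to show the shape of that recipe; the cosine similarity, the pooling, and the temperature are assumptions and not either paper's exact formulation.

```python
# Generic contrastive (InfoNCE-style) loss over one faithful ("positive") output
# representation and several hallucinated ("negative") ones for the same source.
# The similarity measure and temperature are illustrative assumptions.
import torch
import torch.nn.functional as F

def contrastive_faithfulness_loss(anchor, positive, negatives, temperature=0.1):
    # anchor:    (H,)   pooled representation of the source/context
    # positive:  (H,)   pooled representation of a faithful output
    # negatives: (N, H) pooled representations of hallucinated outputs
    candidates = torch.cat([positive.unsqueeze(0), negatives], dim=0)          # (1+N, H)
    sims = F.cosine_similarity(anchor.unsqueeze(0), candidates, dim=-1) / temperature
    # The positive sits at index 0; cross-entropy pushes it above the negatives.
    return F.cross_entropy(sims.unsqueeze(0), torch.zeros(1, dtype=torch.long))

# Toy usage with random vectors, just to show the shapes.
H, N = 256, 4
loss = contrastive_faithfulness_loss(torch.randn(H), torch.randn(H), torch.randn(N, H))
```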

Don't say that! Making inconsistent dialogue unlikely with unlikelihood training

M Li, S Roller, I Kulikov, S Welleck, YL Boureau… - arXiv preprint arXiv …, 2019 - arxiv.org
Generative dialogue models currently suffer from a number of problems which standard
maximum likelihood training does not address. They tend to produce generations that (i) rely …

CAiRE: An end-to-end empathetic chatbot

Z Lin, P Xu, GI Winata, FB Siddique, Z Liu, J Shin… - Proceedings of the AAAI …, 2020 - aaai.org
We present CAiRE, an end-to-end generative empathetic chatbot designed to recognize
user emotions and respond in an empathetic manner. Our system adapts the Generative Pre …

The CRINGE loss: Learning what language not to model

L Adolphs, T Gao, J Xu, K Shuster… - arXiv preprint arXiv …, 2022 - arxiv.org
Standard language model training employs gold human documents or human-human
interaction data, and treats all training data as positive examples. Growing evidence shows …
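
As a loose paraphrase of the idea named in this entry's title, the sketch below contrasts each token of a negative example against an alternative token sampled from the model's own top-k predictions; the top-k size, the sampling step, and the two-way softmax are my assumptions about the setup, not the paper's reference implementation.

```python
# Rough sketch of a CRINGE-style negative-example loss: rather than only maximizing
# the likelihood of positive data, each token of a *negative* example is pushed
# below an alternative token drawn from the model's own top-k predictions.
import torch
import torch.nn.functional as F

def cringe_style_negative_loss(logits, negative_tokens, k=5):
    # logits: (T, V) next-token logits along a negative training sequence
    # negative_tokens: (T,) the tokens of that negative sequence
    T, V = logits.shape
    # Mask out the negative token, then sample a contrasting token from the top-k.
    masked = logits.scatter(1, negative_tokens.unsqueeze(1), float("-inf"))
    topk_vals, topk_idx = masked.topk(k, dim=-1)                         # (T, k)
    choice = torch.multinomial(F.softmax(topk_vals, dim=-1), 1)          # (T, 1)
    pos_tokens = topk_idx.gather(1, choice).squeeze(1)                   # (T,)

    # Two-way softmax: the sampled token should beat the negative token.
    pair = torch.stack([logits.gather(1, pos_tokens.unsqueeze(1)).squeeze(1),
                        logits.gather(1, negative_tokens.unsqueeze(1)).squeeze(1)], dim=1)
    return F.cross_entropy(pair, torch.zeros(T, dtype=torch.long))

# Toy usage with random numbers, just to show the call signature.
loss = cringe_style_negative_loss(torch.randn(6, 50), torch.randint(50, (6,)))
```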

Deep learning for dialogue systems: Chit-chat and beyond

R Yan, J Li, Z Yu - Foundations and Trends® in Information …, 2022 - nowpublishers.com
With the rapid progress of deep neural models and the explosion of available data
resources, dialogue systems that support extensive topics and chit-chat conversations are …