SODA: Million-scale dialogue distillation with social commonsense contextualization

H Kim, J Hessel, L Jiang, P West, X Lu, Y Yu… - arXiv preprint arXiv …, 2022 - arxiv.org
We present SODA: the first publicly available, million-scale high-quality social dialogue
dataset. Using SODA, we train COSMO: a generalizable conversation agent outperforming …

Evaluating human-language model interaction

M Lee, M Srivastava, A Hardy, J Thickstun… - arXiv preprint arXiv …, 2022 - arxiv.org
Many real-world applications of language models (LMs), such as writing assistance and
code autocomplete, involve human-LM interaction. However, most benchmarks are non …

Commonsense reasoning for conversational AI: A survey of the state of the art

C Richardson, L Heck - arXiv preprint arXiv:2302.07926, 2023 - arxiv.org
Large, transformer-based pretrained language models like BERT, GPT, and T5 have
demonstrated a deep understanding of contextual semantics and language syntax. Their …

Using in-context learning to improve dialogue safety

N Meade, S Gella, D Hazarika, P Gupta, D **… - arXiv preprint arXiv …, 2023 - arxiv.org
While large neural-based conversational models have become increasingly proficient
dialogue agents, recent work has highlighted safety issues with these systems. For example …

Think before you speak: Explicitly generating implicit commonsense knowledge for response generation

P Zhou, K Gopalakrishnan, B Hedayatnia, S Kim… - arXiv preprint arXiv …, 2021 - arxiv.org
Implicit knowledge, such as common sense, is key to fluid human conversations. Current
neural response generation (RG) models are trained to generate responses directly …

COSPLAY: Concept set guided personalized dialogue generation across both party personas

C Xu, P Li, W Wang, H Yang, S Wang… - Proceedings of the 45th …, 2022 - dl.acm.org
Maintaining a consistent persona is essential for building a human-like conversational
model. However, the lack of attention to the partner makes the model more egocentric: they …

Reflect, not reflex: Inference-based common ground improves dialogue response quality

P Zhou, H Cho, P Jandaghi, DH Lee, BY Lin… - arXiv preprint arXiv …, 2022 - arxiv.org
Human communication relies on common ground (CG), the mutual knowledge and beliefs
shared by participants, to produce coherent and interesting conversations. In this paper, we …

Lawyers are dishonest? Quantifying representational harms in commonsense knowledge resources

N Mehrabi, P Zhou, F Morstatter, J Pujara… - arXiv preprint arXiv …, 2021 - arxiv.org
Warning: this paper contains content that may be offensive or upsetting. Numerous natural
language processing models have tried injecting commonsense by using the ConceptNet …

Survey on knowledge distillation for large language models: methods, evaluation, and application

C Yang, Y Zhu, W Lu, Y Wang, Q Chen, C Gao… - ACM Transactions on …, 2024 - dl.acm.org
Large Language Models (LLMs) have showcased exceptional capabilities in various
domains, attracting significant interest from both academia and industry. Despite their …

Target-guided dialogue response generation using commonsense and data augmentation

P Gupta, H Jhamtani, JP Bigham - arXiv preprint arXiv:2205.09314, 2022 - arxiv.org
Target-guided response generation enables dialogue systems to smoothly transition a
conversation from a dialogue context toward a target sentence. Such control is useful for …