SODA: Million-scale dialogue distillation with social commonsense contextualization
We present SODA: the first publicly available, million-scale high-quality social dialogue
dataset. Using SODA, we train COSMO: a generalizable conversation agent outperforming …
Evaluating human-language model interaction
Many real-world applications of language models (LMs), such as writing assistance and
code autocomplete, involve human-LM interaction. However, most benchmarks are non …
Commonsense reasoning for conversational AI: A survey of the state of the art
Large, transformer-based pretrained language models like BERT, GPT, and T5 have
demonstrated a deep understanding of contextual semantics and language syntax. Their …
Using in-context learning to improve dialogue safety
While large neural-based conversational models have become increasingly proficient as
dialogue agents, recent work has highlighted safety issues with these systems. For example …
Think before you speak: Explicitly generating implicit commonsense knowledge for response generation
Implicit knowledge, such as common sense, is key to fluid human conversations. Current
neural response generation (RG) models are trained to generate responses directly …
COSPLAY: Concept set guided personalized dialogue generation across both party personas
Maintaining a consistent persona is essential for building a human-like conversational
model. However, the lack of attention to the partner makes the model more egocentric: they …
Reflect, not reflex: Inference-based common ground improves dialogue response quality
Human communication relies on common ground (CG), the mutual knowledge and beliefs
shared by participants, to produce coherent and interesting conversations. In this paper, we …
Lawyers are dishonest? Quantifying representational harms in commonsense knowledge resources
Warning: this paper contains content that may be offensive or upsetting. Numerous natural
language processing models have tried injecting commonsense by using the ConceptNet …
Survey on knowledge distillation for large language models: Methods, evaluation, and application
Large Language Models (LLMs) have showcased exceptional capabilities in various
domains, attracting significant interest from both academia and industry. Despite their …
Target-guided dialogue response generation using commonsense and data augmentation
Target-guided response generation enables dialogue systems to smoothly transition a
conversation from a dialogue context toward a target sentence. Such control is useful for …