A Bayesian theory of mind approach to modeling cooperation and communication
Language has been widely acknowledged as the benchmark of intelligence.
However, evidence from cognitive science shows that intelligent behaviors in robust social …
Yourefit: Embodied reference understanding with language and gesture
We study the machine's understanding of embodied reference: One agent uses both
language and gesture to refer to an object to another agent in a shared physical …
Communicative learning: A unified learning formalism
In this article, we propose a communicative learning (CL) formalism that unifies existing
machine learning paradigms, such as passive learning, active learning, algorithmic …
Pragmatic Instruction Following and Goal Assistance via Cooperative Language-Guided Inverse Planning
People often give instructions whose meaning is ambiguous without further context,
expecting that their actions or goals will disambiguate their intentions. How can we build …
Intention beyond desire: Spontaneous intentional commitment regulates conflicting desires
The human mind is a mosaic composed of multiple selves with conflicting desires. How can
coherent actions emerge from such conflicts? Classical desire theory argues that rational …
Patron: perspective-aware multitask model for referring expression grounding using embodied multimodal cues
Humans naturally use referring expressions with verbal utterances and nonverbal gestures
to refer to objects and events. As these referring expressions can be interpreted differently …
CAESAR: An embodied simulator for generating multimodal referring expression datasets
Humans naturally use verbal utterances and nonverbal gestures to refer to various objects
(known as *referring expressions*) in different interactional scenarios. As collecting …
Exploring an imagined “we” in human collective hunting: Joint commitment within shared intentionality
Human collaboration often involves a decision to pursue one out of multiple comparable
goals, in which case it is challenging to remain committed to the same goal collectively …
Interactive inference: a multi-agent model of cooperative joint actions
We advance a novel computational model of multi-agent, cooperative joint actions that is
grounded in the cognitive framework of active inference. The model assumes that to solve a …
Human-robot interaction in a shared augmented reality workspace
We design and develop a new shared Augmented Reality (AR) workspace for Human-Robot
Interaction (HRI), which establishes a bi-directional communication between human agents …