Deep neural networks for multiple speaker detection and localization

W He, P Motlicek, JM Odobez - 2018 IEEE International …, 2018 - ieeexplore.ieee.org
We propose to use neural networks for simultaneous detection and localization of multiple
sound sources in human-robot interaction. In contrast to conventional signal processing …

Combining dynamic head pose–gaze mapping with the robot conversational state for attention recognition in human–robot interactions

S Sheikhi, JM Odobez - Pattern Recognition Letters, 2015 - Elsevier
The ability to recognize the visual focus of attention (VFOA, i.e., what or whom a person is
looking at) of people is important for robots or conversational agents interacting with multiple …

Underworlds: Cascading situation assessment for robots

S Lemaignan, Y Sallami, C Wallbridge… - 2018 IEEE/RSJ …, 2018 - ieeexplore.ieee.org
We introduce UNDERWORLDS, a novel lightweight framework for cascading spatio-temporal
situation assessment in robotics. UNDERWORLDS allows programmers to …

Overworld: Assessing the geometry of the world for Human-Robot Interaction

G Sarthou - IEEE Robotics and Automation Letters, 2023 - ieeexplore.ieee.org
For a robot to interact with humans in a given environment, a key need is to understand its
environment in terms of the objects composing it, the other agents acting in it, and the …

Development of acoustic source localization with adaptive neural network using distance mating‐based red deer algorithm

E Bharat Babu, DH Krishna, SM Hussain… - Computational …, 2023 - Wiley Online Library
Multichannel audio processing approaches are widely examined in human–computer
interaction, autonomous robots, audio surveillance, and teleconferencing systems. The …

How to make a robot guide?

A Mayima, G Sarthou, G Buisan… - International Symposium …, 2023 - Springer
We present a service robot able to provide directions to people in a natural way, taking into
account and adapting to humans' perspectives. Like a human helping someone to find their …

The MuMMER data set for robot perception in multi-party HRI scenarios

O Canévet, W He, P Motlicek… - 2020 29th IEEE …, 2020 - ieeexplore.ieee.org
This paper presents the MuMMER data set, a data set for human-robot interaction scenarios
that is available for research purposes. It comprises 1h 29 min of multimodal recordings of …

Leveraging the robot dialog state for visual focus of attention recognition

S Sheikhi, V Khalidov, D Klotz, B Wrede… - Proceedings of the 15th …, 2013 - dl.acm.org
The Visual Focus of Attention (VFOA, what or whom a person is looking at) is a
fundamental cue in non-verbal communication and plays an important role when designing …

Deep Learning Approaches for Auditory Perception in Robotics

W He - 2021 - infoscience.epfl.ch
Auditory perception is an essential part of a robotic system in Human-Robot Interaction
(HRI), and creating an artificial auditory perception system that is on par with humans has …