Enabling voice-accompanying hand-to-face gesture recognition with cross-device sensing
Gestures that accompany the voice are essential for voice interaction, conveying
complementary semantics for interaction purposes such as wake-up state and input …
EarSE: Bringing Robust Speech Enhancement to COTS Headphones
Speech enhancement is regarded as the key to the quality of digital communication and is
gaining increasing attention in the research field of audio processing. In this paper, we …
Semantic hearing: Programming acoustic scenes with binaural hearables
Imagine being able to listen to the birds chirping in a park without hearing the chatter from
other hikers, or being able to block out traffic noise on a busy street while still being able to …
EHTrack: Earphone-based head tracking via only acoustic signals
Head tracking is a technique that allows for the measurement and analysis of human focus
and attention, thus enhancing the experience of human–computer interaction (HCI) …
GazeReader: Detecting unknown word using webcam for English as a Second Language (ESL) learners
Automatic unknown word detection techniques can enable new applications for assisting
English as a Second Language (ESL) learners, thus improving their reading experiences …
MAF: Exploring Mobile Acoustic Field for Hand-to-Face Gesture Interactions
We present MAF, a novel acoustic sensing approach that leverages the commodity
hardware in bone conduction earphones for hand-to-face gesture interactions. Briefly, by …
On-Device Training Empowered Transfer Learning For Human Activity Recognition
Human Activity Recognition (HAR) is an attractive topic for perceiving human behavior and
supplying assistive services. Besides the classical inertial unit and vision-based HAR …
MMTSA: Multi-Modal Temporal Segment Attention Network for Efficient Human Activity Recognition
Multimodal sensors provide complementary information to develop accurate machine-
learning methods for human activity recognition (HAR), but introduce significantly higher …
The EarSAVAS Dataset: Enabling Subject-Aware Vocal Activity Sensing on Earables
Subject-aware vocal activity sensing on wearables, which specifically recognizes and
monitors the wearer's distinct vocal activities, is essential in advancing personal health …
G-VOILA: Gaze-Facilitated Information Querying in Daily Scenarios
Modern information querying systems are progressively incorporating multimodal inputs like
vision and audio. However, the integration of gaze, a modality deeply linked to user intent …