MuteIt: Jaw motion based unvoiced command recognition using earable

T Srivastava, P Khanna, S Pan, P Nguyen… - Proceedings of the ACM …, 2022 - dl.acm.org
In this paper, we present MuteIt, an ear-worn system for recognizing unvoiced human
commands. MuteIt presents an intuitive alternative to voice-based interactions that can be …

Deep speech synthesis from articulatory representations

P Wu, S Watanabe, L Goldstein, AW Black… - arXiv …
TG Csapó, G Gosztolya, L Tóth, AH Shandiz, A Markó - Sensors, 2022 - mdpi.com
Within speech processing, articulatory-to-acoustic mapping (AAM) methods can apply
ultrasound tongue imaging (UTI) as an input. (Micro)convex transducers are mostly used …

Style modeling for multi-speaker articulation-to-speech

M Kim, Z Piao, J Lee, HG Kang - ICASSP 2023-2023 IEEE …, 2023 - ieeexplore.ieee.org
In this paper, we propose a neural articulation-to-speech (ATS) framework that synthesizes
high-quality speech from articulatory signals in a multi-speaker setting. Most conventional …

Reconstructing speech from real-time articulatory MRI using neural vocoders

Y Yu, AH Shandiz, L Tóth - 2021 29th European Signal …, 2021 - ieeexplore.ieee.org
Several approaches exist for the recording of articulatory movements, such as
electromagnetic and permanent magnetic articulography, ultrasound tongue imaging and …

[PDF][PDF] Speech synthesis from intracranial stereotactic Electroencephalography using a neural vocoder.

FV Arthur, TG Csapó - Infocommunications Journal, 2024 - infocommunications.hu
Speech is one of the most important human biosignals. However, only some speech
production characteristics are fully understood, which are required for a successful …