M-face: An appearance-based photorealistic model for multiple facial attributes rendering

Y Fu, N Zheng - IEEE Transactions on Circuits and Systems for …, 2006 - ieeexplore.ieee.org
A novel framework for appearance-based photorealistic facial modeling, called Merging
Face (M-Face), is presented and applied to generate emotional facial attributes in rotated …

Real-time synthesis of Chinese visual speech and facial expressions using MPEG-4 FAP features in a three-dimensional avatar.

Z Wu, S Zhang, L Cai, HM Meng - Interspeech, 2006 - hcsi.cs.tsinghua.edu.cn
This paper describes our initial work in developing a real-time audio-visual Chinese speech
synthesizer with a 3D expressive avatar. The avatar model is parameterized according to the …

HMM trajectory-guided sample selection for photo-realistic talking head

L Wang, FK Soong - Multimedia Tools and Applications, 2015 - Springer
In this paper, we propose an HMM trajectory-guided, real image sample concatenation
approach to photo-realistic talking head synthesis. An audio-visual database of a person is …

Development of automatic speech recognition and synthesis technologies to support Chinese learners of English: The CUHK experience

H Meng, WK Lo, AM Harrison, P Lee, KH Wong… - Proc. APSIPA …, 2010 - se.cuhk.edu.hk
This paper presents our group's ongoing research in the area of computer-aided
pronunciation training (CAPT) for Chinese learners of English. Our goal is to develop …

Realistic visual speech synthesis based on hybrid concatenation method

J Tao, L …

… mapping for Lithuanian speech animation
I Mazonaviciute, R Bausys - Elektronika ir Elektrotechnika, 2011 - eejournal.ktu.lt
A methodology for Lithuanian phoneme visualization is proposed, using the viseme set of a base
language appended with new visemes defined to animate specific Lithuanian phonemes …

Emotional audio visual Arabic text to speech

M Abou Zliekha, S Al-Moubayed… - 2006 14th European …, 2006 - ieeexplore.ieee.org
The goal of this paper is to present an emotional audio-visual text-to-speech system for the
Arabic language. The system is based on two entities: an emotional audio text-to-speech …

Dynamic mapping method based speech driven face animation system

P Yin, J Tao - International Conference on Affective Computing and …, 2005 - Springer
In this paper, we design and develop a speech-driven face animation system based on the
dynamic mapping method. The face animation is synthesized by unit concatenation, and …

Expressive face animation synthesis based on dynamic mapping method

P Yin, L Zhao, L Huang, J Tao - International Conference on Affective …, 2007 - Springer
In this paper, we present a framework for a speech-driven face animation system with
expressions. It systematically addresses audio-visual data acquisition, expressive trajectory …