A Comprehensive Review of Data‐Driven Co‐Speech Gesture Generation

S Nyatsanga, T Kucherenko, C Ahuja… - Computer Graphics …, 2023 - Wiley Online Library
Gestures that accompany speech are an essential part of natural and efficient embodied
human communication. The automatic generation of such co‐speech gestures is a long …

BEAT: A large-scale semantic and emotional multi-modal dataset for conversational gestures synthesis

H Liu, Z Zhu, N Iwamoto, Y Peng, Z Li, Y Zhou… - European conference on …, 2022 - Springer
Achieving realistic, vivid, and human-like synthesized conversational gestures conditioned
on multi-modal data is still an unsolved problem due to the lack of available datasets …

The GENEA Challenge 2022: A large evaluation of data-driven co-speech gesture generation

Y Yoon, P Wolfert, T Kucherenko, C Viegas… - Proceedings of the …, 2022 - dl.acm.org
This paper reports on the second GENEA Challenge to benchmark data-driven automatic co-
speech gesture generation. Participating teams used the same speech and motion dataset …

QPGesture: Quantization-based and phase-guided motion matching for natural speech-driven gesture generation

S Yang, Z Wu, M Li, Z Zhang, L Hao… - Proceedings of the …, 2023 - openaccess.thecvf.com
Speech-driven gesture generation is highly challenging due to the random jitters of human
motion. In addition, there is an inherent asynchronous relationship between human speech …

A motion matching-based framework for controllable gesture synthesis from speech

I Habibie, M Elgharib, K Sarkar, A Abdullah… - ACM SIGGRAPH 2022 …, 2022 - dl.acm.org
Recent deep learning-based approaches have shown promising results for synthesizing
plausible 3D human gestures from speech input. However, these approaches typically offer …

The GENEA Challenge 2023: A large-scale evaluation of gesture generation models in monadic and dyadic settings

T Kucherenko, R Nagy, Y Yoon, J Woo… - Proceedings of the 25th …, 2023 - dl.acm.org
This paper reports on the GENEA Challenge 2023, in which participating teams built speech-
driven gesture-generation systems using the same speech and motion dataset, followed by …

The design and observed effects of robot-performed manual gestures: A systematic review

J De Wit, P Vogt, E Krahmer - ACM Transactions on Human-Robot …, 2023 - dl.acm.org
Communication using manual (hand) gestures is considered a defining property of social
robots, given their physical embodiment and presence; therefore, we see a need for a …

DisCo: Disentangled implicit content and rhythm learning for diverse co-speech gestures synthesis

H Liu, N Iwamoto, Z Zhu, Z Li, Y Zhou… - Proceedings of the 30th …, 2022 - dl.acm.org
Current co-speech gesture synthesis methods struggle to generate diverse motions and
typically collapse to a single or a few frequent motion sequences, which are trained on …

GestureMaster: Graph-based speech-driven gesture generation

C Zhou, T Bian, K Chen - … of the 2022 International Conference on …, 2022 - dl.acm.org
This paper describes the GestureMaster entry to the GENEA (Generation and Evaluation of
Non-verbal Behaviour for Embodied Agents) Challenge 2022. Given speech audio and text …

Diff-TTSG: Denoising probabilistic integrated speech and gesture synthesis

S Mehta, S Wang, S Alexanderson, J Beskow… - arXiv preprint arXiv …, 2023 - arxiv.org
With read-aloud speech synthesis achieving high naturalness scores, there is a growing
research interest in synthesising spontaneous speech. However, human spontaneous face …