Generating human motion from textual descriptions with discrete representations

J Zhang, Y Zhang, X Cun, Y Zhang… - Proceedings of the …, 2023 - openaccess.thecvf.com
In this work, we investigate a simple and well-known conditional generative framework
based on Vector Quantised-Variational AutoEncoder (VQ-VAE) and Generative Pre-trained …

GestureDiffuCLIP: Gesture diffusion model with CLIP latents

T Ao, Z Zhang, L Liu - ACM Transactions on Graphics (TOG), 2023 - dl.acm.org
The automatic generation of stylized co-speech gestures has recently received increasing
attention. Previous systems typically allow style control via predefined text labels or example …

MotionLCM: Real-time controllable motion generation via latent consistency model

W Dai, LH Chen, J Wang, J Liu, B Dai… - European Conference on …, 2024 - Springer
This work introduces MotionLCM, extending controllable motion generation to a real-time
level. Existing methods for spatial-temporal control in text-conditioned motion generation …

Synthesizing diverse human motions in 3D indoor scenes

K Zhao, Y Zhang, S Wang… - Proceedings of the …, 2023 - openaccess.thecvf.com
We present a novel method for populating 3D indoor scenes with virtual humans that can
navigate in the environment and interact with objects in a realistic manner. Existing …

SINC: Spatial composition of 3D human motions for simultaneous action generation

N Athanasiou, M Petrovich… - Proceedings of the …, 2023 - openaccess.thecvf.com
Our goal is to synthesize 3D human motions given textual inputs describing simultaneous
actions, for example 'waving hand' while 'walking' at the same time. We refer to generating such …

ControlVAE: Model-based learning of generative controllers for physics-based characters

H Yao, Z Song, B Chen, L Liu - ACM Transactions on Graphics (TOG), 2022 - dl.acm.org
In this paper, we introduce ControlVAE, a novel model-based framework for learning
generative motion control policies based on variational autoencoders (VAE). Our framework …

Single motion diffusion

S Raab, I Leibovitch, G Tevet, M Arar… - arXiv preprint arXiv …, 2023 - arxiv.org
Synthesizing realistic animations of humans, animals, and even imaginary creatures, has
long been a goal for artists and computer graphics professionals. Compared to the imaging …

Virtual instrument performances (VIP): A comprehensive review

T Kyriakou, MÁ de la Campa Crespo… - Computer Graphics …, 2024 - Wiley Online Library
Driven by recent advancements in Extended Reality (XR), the hype around the Metaverse,
and real-time computer graphics, the transformation of the performing arts, particularly in …

Generating human motion in 3D scenes from text descriptions

Z Cen, H Pi, S Peng, Z Shen, M Yang… - Proceedings of the …, 2024 - openaccess.thecvf.com
Generating human motions from textual descriptions has gained growing research interest
due to its wide range of applications. However, only a few works consider human-scene …

Emotional speech-driven 3D body animation via disentangled latent diffusion

K Chhatre, N Athanasiou, G Becherini… - Proceedings of the …, 2024 - openaccess.thecvf.com
Existing methods for synthesizing 3D human gestures from speech have shown promising
results but they do not explicitly model the impact of emotions on the generated gestures …