Executing your commands via motion diffusion in latent space
We study a challenging task, conditional human motion generation, which produces
plausible human motion sequences according to various conditional inputs, such as action …
TEMOS: Generating Diverse Human Motions from Textual Descriptions
We address the problem of generating diverse 3D human motions from textual descriptions.
This challenging task requires joint modeling of both modalities: understanding and …
Motion-X: A large-scale 3D expressive whole-body human motion dataset
In this paper, we present Motion-X, a large-scale 3D expressive whole-body motion dataset.
Existing motion datasets predominantly contain body-only poses, lacking facial expressions …
Action-conditioned 3D human motion synthesis with Transformer VAE
We tackle the problem of action-conditioned generation of realistic and diverse human
motion sequences. In contrast to methods that complete, or extend, motion sequences, this …
Guided motion diffusion for controllable human motion synthesis
Denoising diffusion models have shown great promise in human motion synthesis
conditioned on natural language descriptions. However, integrating spatial constraints, such …
TEACH: Temporal action composition for 3D humans
Given a series of natural language descriptions, our task is to generate 3D human motions
that correspond semantically to the text, and follow the temporal order of the instructions. In …
MotionLCM: Real-time controllable motion generation via latent consistency model
This work introduces MotionLCM, extending controllable motion generation to a real-time
level. Existing methods for spatial-temporal control in text-conditioned motion generation …
EMDM: Efficient motion diffusion model for fast and high-quality motion generation
We introduce Efficient Motion Diffusion Model (EMDM) for fast and high-quality
human motion generation. Current state-of-the-art generative diffusion models have …
TMR: Text-to-motion retrieval using contrastive 3D human motion synthesis
In this paper, we present TMR, a simple yet effective approach for text-to-3D human motion
retrieval. While previous work has only treated retrieval as a proxy evaluation metric, we …
TLControl: Trajectory and language control for human motion synthesis
Controllable human motion synthesis is essential for applications in AR/VR, gaming and
embodied AI. Existing methods often focus solely on either language or full trajectory control …