Executing Your Commands via Motion Diffusion in Latent Space

X Chen, B Jiang, W Liu, Z Huang… - Proceedings of the …, 2023 - openaccess.thecvf.com
We study a challenging task, conditional human motion generation, which produces
plausible human motion sequences according to various conditional inputs, such as action …

TEMOS: Generating Diverse Human Motions from Textual Descriptions

M Petrovich, MJ Black, G Varol - European Conference on Computer …, 2022 - Springer
We address the problem of generating diverse 3D human motions from textual descriptions.
This challenging task requires joint modeling of both modalities: understanding and …

Motion-X: A Large-Scale 3D Expressive Whole-Body Human Motion Dataset

J Lin, A Zeng, S Lu, Y Cai, R Zhang… - Advances in Neural …, 2023 - proceedings.neurips.cc
In this paper, we present Motion-X, a large-scale 3D expressive whole-body motion dataset.
Existing motion datasets predominantly contain body-only poses, lacking facial expressions …

Action-Conditioned 3D Human Motion Synthesis with Transformer VAE

M Petrovich, MJ Black, G Varol - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
We tackle the problem of action-conditioned generation of realistic and diverse human
motion sequences. In contrast to methods that complete, or extend, motion sequences, this …

Guided Motion Diffusion for Controllable Human Motion Synthesis

K Karunratanakul, K Preechakul… - Proceedings of the …, 2023 - openaccess.thecvf.com
Denoising diffusion models have shown great promise in human motion synthesis
conditioned on natural language descriptions. However, integrating spatial constraints, such …

TEACH: Temporal Action Composition for 3D Humans

N Athanasiou, M Petrovich, MJ Black… - … Conference on 3D …, 2022 - ieeexplore.ieee.org
Given a series of natural language descriptions, our task is to generate 3D human motions
that correspond semantically to the text, and follow the temporal order of the instructions. In …

MotionLCM: Real-Time Controllable Motion Generation via Latent Consistency Model

W Dai, LH Chen, J Wang, J Liu, B Dai… - European Conference on …, 2024 - Springer
This work introduces MotionLCM, extending controllable motion generation to a real-time
level. Existing methods for spatial-temporal control in text-conditioned motion generation …

EMDM: Efficient Motion Diffusion Model for Fast and High-Quality Motion Generation

W Zhou, Z Dou, Z Cao, Z Liao, J Wang, W Wang… - … on Computer Vision, 2024 - Springer
We introduce Efficient Motion Diffusion Model (EMDM) for fast and high-quality
human motion generation. Current state-of-the-art generative diffusion models have …

TMR: Text-to-Motion Retrieval Using Contrastive 3D Human Motion Synthesis

M Petrovich, MJ Black, G Varol - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
In this paper, we present TMR, a simple yet effective approach for text-to-3D human motion
retrieval. While previous work has only treated retrieval as a proxy evaluation metric, we …

TLControl: Trajectory and Language Control for Human Motion Synthesis

W Wan, Z Dou, T Komura, W Wang… - … on Computer Vision, 2024 - Springer
Controllable human motion synthesis is essential for applications in AR/VR, gaming and
embodied AI. Existing methods often focus solely on either language or full trajectory control …