MotionLCM: Real-time controllable motion generation via latent consistency model

W Dai, LH Chen, J Wang, J Liu, B Dai… - European Conference on …, 2024 - Springer
This work introduces MotionLCM, extending controllable motion generation to a real-time
level. Existing methods for spatial-temporal control in text-conditioned motion generation …

EMDM: Efficient motion diffusion model for fast and high-quality motion generation

W Zhou, Z Dou, Z Cao, Z Liao, J Wang, W Wang… - … on Computer Vision, 2024 - Springer
We introduce Efficient Motion Diffusion Model (EMDM) for fast and high-quality
human motion generation. Current state-of-the-art generative diffusion models have …

TLControl: Trajectory and language control for human motion synthesis

W Wan, Z Dou, T Komura, W Wang… - … on Computer Vision, 2024 - Springer
Controllable human motion synthesis is essential for applications in AR/VR, gaming, and
embodied AI. Existing methods often focus solely on either language or full trajectory control …

Taming diffusion probabilistic models for character control

R Chen, M Shi, S Huang, P Tan, T Komura… - ACM SIGGRAPH 2024 …, 2024 - dl.acm.org
We present a novel character control framework that effectively utilizes motion diffusion
probabilistic models to generate high-quality and diverse character animations, responding …

Disentangled clothed avatar generation from text descriptions

J Wang, Y Liu, Z Dou, Z Yu, Y Liang, C Lin… - … on Computer Vision, 2024 - Springer
In this paper, we introduce a novel text-to-avatar generation method that separately
generates the human body and the clothes and allows high-quality animation on the …

MaskedMimic: Unified physics-based character control through masked motion inpainting

C Tessler, Y Guo, O Nabati, G Chechik… - ACM Transactions on …, 2024 - dl.acm.org
Crafting a single, versatile physics-based controller that can breathe life into interactive
characters across a wide spectrum of scenarios represents an exciting frontier in character …

Part123: Part-aware 3D reconstruction from a single-view image

A Liu, C Lin, Y Liu, X Long, Z Dou, HX Guo… - ACM SIGGRAPH 2024 …, 2024 - dl.acm.org
Recently, the emergence of diffusion models has opened up new opportunities for single-
view reconstruction. However, all the existing methods represent the target object as a …

CoMo: Controllable motion generation through language-guided pose code editing

Y Huang, W Wan, Y Yang, C Callison-Burch… - … on Computer Vision, 2024 - Springer
Text-to-motion models excel at efficient human motion generation, but existing approaches
lack fine-grained controllability over the generation process. Consequently, modifying subtle …

LaserHuman: Language-guided scene-aware human motion generation in free environment

P Cong, Z Wang, Z Dou, Y Ren, W Yin, K Cheng… - arXiv preprint arXiv …, 2024 - arxiv.org
Language-guided scene-aware human motion generation has great significance for
entertainment and robotics. In response to the limitations of existing datasets, we introduce …

Synthesizing physically plausible human motions in 3D scenes

L Pan, J Wang, B Huang, J Zhang… - … Conference on 3D …, 2024 - ieeexplore.ieee.org
We present a physics-based character control framework for synthesizing human-scene
interactions. Recent advances adopt physics simulation to mitigate artifacts produced by …