MotionLCM: Real-time controllable motion generation via latent consistency model
This work introduces MotionLCM, extending controllable motion generation to a real-time
level. Existing methods for spatial-temporal control in text-conditioned motion generation …
EMDM: Efficient motion diffusion model for fast and high-quality motion generation
We introduce Efficient Motion Diffusion Model (EMDM) for fast and high-quality
human motion generation. Current state-of-the-art generative diffusion models have …
TLControl: Trajectory and language control for human motion synthesis
Controllable human motion synthesis is essential for applications in AR/VR, gaming, and
embodied AI. Existing methods often focus solely on either language or full trajectory control …
Taming diffusion probabilistic models for character control
We present a novel character control framework that effectively utilizes motion diffusion
probabilistic models to generate high-quality and diverse character animations, responding …
Disentangled clothed avatar generation from text descriptions
In this paper, we introduce a novel text-to-avatar generation method that separately
generates the human body and the clothes and allows high-quality animation on the …
MaskedMimic: Unified physics-based character control through masked motion inpainting
Crafting a single, versatile physics-based controller that can breathe life into interactive
characters across a wide spectrum of scenarios represents an exciting frontier in character …
Part123: Part-aware 3D reconstruction from a single-view image
Recently, the emergence of diffusion models has opened up new opportunities for single-
view reconstruction. However, all the existing methods represent the target object as a …
CoMo: Controllable motion generation through language guided pose code editing
Text-to-motion models excel at efficient human motion generation, but existing approaches
lack fine-grained controllability over the generation process. Consequently, modifying subtle …
LaserHuman: Language-guided scene-aware human motion generation in free environment
Language-guided scene-aware human motion generation has great significance for
entertainment and robotics. In response to the limitations of existing datasets, we introduce …
Synthesizing physically plausible human motions in 3D scenes
We present a physics-based character control framework for synthesizing human-scene
interactions. Recent advances adopt physics simulation to mitigate artifacts produced by …