A survey on video diffusion models
The recent wave of AI-generated content (AIGC) has witnessed substantial success in
computer vision, with the diffusion model playing a crucial role in this achievement. Due to …
State of the art on diffusion models for visual computing
The field of visual computing is rapidly advancing due to the emergence of generative
artificial intelligence (AI), which unlocks unprecedented capabilities for the generation …
DynamiCrafter: Animating open-domain images with video diffusion priors
Animating a still image offers an engaging visual experience. Traditional image animation
techniques mainly focus on animating natural scenes with stochastic dynamics (e.g., clouds …
Uni-ControlNet: All-in-one control to text-to-image diffusion models
Text-to-Image diffusion models have made tremendous progress over the past two years,
enabling the generation of highly realistic images based on open-domain text descriptions …
DriveDreamer: Towards Real-World-Driven World Models for Autonomous Driving
World models, especially in autonomous driving, are trending and drawing extensive
attention due to their capacity for comprehending driving environments. The established …
Lumiere: A space-time diffusion model for video generation
We introduce Lumiere, a text-to-video diffusion model designed for synthesizing videos that
portray realistic, diverse, and coherent motion, a pivotal challenge in video synthesis. To this …
MotionCtrl: A unified and flexible motion controller for video generation
Motions in a video primarily consist of camera motion, induced by camera movement, and
object motion, resulting from object movement. Accurate control of both camera and object …
ModelScope text-to-video technical report
This paper introduces ModelScopeT2V, a text-to-video synthesis model that evolves from a
text-to-image synthesis model (i.e., Stable Diffusion). ModelScopeT2V incorporates spatio …
SparseCtrl: Adding sparse controls to text-to-video diffusion models
The development of text-to-video (T2V), i.e., generating videos from a given text prompt, has
been significantly advanced in recent years. However, relying solely on text prompts often …
Animate anyone: Consistent and controllable image-to-video synthesis for character animation
L Hu - Proceedings of the IEEE/CVF Conference on …, 2024 - openaccess.thecvf.com
Character animation aims to generate character videos from still images through driving
signals. Currently, diffusion models have become the mainstream in visual generation …