A survey on video diffusion models

Z Xing, Q Feng, H Chen, Q Dai, H Hu, H Xu… - ACM Computing …, 2024 - dl.acm.org
The recent wave of AI-generated content (AIGC) has witnessed substantial success in
computer vision, with the diffusion model playing a crucial role in this achievement. Due to …

VBench: Comprehensive benchmark suite for video generative models

Z Huang, Y He, J Yu, F Zhang, C Si… - Proceedings of the …, 2024 - openaccess.thecvf.com
Video generation has witnessed significant advancements, yet evaluating these models
remains a challenge. A comprehensive evaluation benchmark for video generation is …

DynamiCrafter: Animating open-domain images with video diffusion priors

J Xing, M Xia, Y Zhang, H Chen, W Yu, H Liu… - … on Computer Vision, 2024 - Springer
Animating a still image offers an engaging visual experience. Traditional image animation
techniques mainly focus on animating natural scenes with stochastic dynamics (e.g., clouds …

I2VGen-XL: High-quality image-to-video synthesis via cascaded diffusion models

S Zhang, J Wang, Y Zhang, K Zhao, H Yuan… - arXiv preprint arXiv …, 2023 - arxiv.org
Video synthesis has recently made remarkable strides, benefiting from the rapid
development of diffusion models. However, it still encounters challenges in terms of …

CameraCtrl: Enabling camera control for text-to-video generation

H He, Y Xu, Y Guo, G Wetzstein, B Dai, H Li… - arXiv preprint arXiv …, 2024 - arxiv.org
Controllability plays a crucial role in video generation since it allows users to create desired
content. However, existing models have largely overlooked the precise control of camera pose …

ToonCrafter: Generative cartoon interpolation

J Xing, H Liu, M Xia, Y Zhang, X Wang, Y Shan… - ACM Transactions on …, 2024 - dl.acm.org
We introduce ToonCrafter, a novel approach that transcends traditional correspondence-
based cartoon video interpolation, paving the way for generative interpolation. Traditional …

InstructVideo: Instructing video diffusion models with human feedback

H Yuan, S Zhang, X Wang, Y Wei… - Proceedings of the …, 2024 - openaccess.thecvf.com
Diffusion models have emerged as the de facto paradigm for video generation. However,
their reliance on web-scale data of varied quality often yields results that are visually …

Diffusion model-based video editing: A survey

W Sun, RC Tu, J Liao, D Tao - arXiv preprint arXiv:2407.07111, 2024 - arxiv.org
The rapid development of diffusion models (DMs) has significantly advanced image and
video applications, making "what you want is what you see" a reality. Among these, video …

ReVideo: Remake a video with motion and content control

C Mou, M Cao, X Wang, Z Zhang… - Advances in Neural …, 2025 - proceedings.neurips.cc
Despite significant advancements in video generation and editing using diffusion models,
achieving accurate and localized video editing remains a substantial challenge …

MOFA-Video: Controllable image animation via generative motion field adaptions in frozen image-to-video diffusion model

M Niu, X Cun, X Wang, Y Zhang, Y Shan… - European Conference on …, 2024 - Springer
We present MOFA-Video, an advanced controllable image animation method that generates
video from the given image using various additional controllable signals (such as human …