From Slow Bidirectional to Fast Causal Video Generators
Current video diffusion models achieve impressive generation quality but struggle in
interactive applications due to bidirectional attention dependencies. The generation of a …
Stable Consistency Tuning: Understanding and Improving Consistency Models
Diffusion models achieve superior generation quality but suffer from slow generation speed
due to the iterative nature of denoising. In contrast, consistency models, a new generative …
Individual Content and Motion Dynamics Preserved Pruning for Video Diffusion Models
The high computational cost and slow inference time are major obstacles to deploying video
diffusion models (VDMs) in practical applications. To overcome this, we introduce a new …
SnapGen-V: Generating a Five-Second Video within Five Seconds on a Mobile Device
We have witnessed the unprecedented success of diffusion-based video generation over
the past year. Recently proposed models from the community have wielded the power to …
Real-time One-Step Diffusion-based Expressive Portrait Videos Generation
Latent diffusion models have made great strides in generating expressive portrait videos
with accurate lip-sync and natural motion from a single reference image and audio input …