LinFusion: 1 GPU, 1 Minute, 16K Image

S Liu, W Yu, Z Tan, X Wang - arXiv preprint arXiv:2409.02097, 2024 - arxiv.org
Modern diffusion models, particularly those utilizing a Transformer-based UNet for
denoising, rely heavily on self-attention operations to manage complex spatial relationships …

Rectified Diffusion: Straightness Is Not Your Need in Rectified Flow

FY Wang, L Yang, Z Huang, M Wang, H Li - arXiv preprint arXiv …, 2024 - arxiv.org
Diffusion models have greatly improved visual generation but are hindered by slow
generation speed due to the computationally intensive nature of solving generative ODEs …

OSV: One Step is Enough for High-Quality Image to Video Generation

X Mao, Z Jiang, FY Wang, W Zhu, J Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
Video diffusion models have shown great potential in generating high-quality videos,
making them an increasingly popular focus. However, their inherent iterative nature leads to …

One-step diffusion policy: Fast visuomotor policies via diffusion distillation

Z Wang, Z Li, A Mandlekar, Z Xu, J Fan… - arXiv preprint arXiv …, 2024 - arxiv.org
Diffusion models, praised for their success in generative tasks, are increasingly being
applied to robotics, demonstrating exceptional performance in behavior cloning. However …

Flow generator matching

Z Huang, Z Geng, W Luo, G Qi - arXiv preprint arXiv:2410.19310, 2024 - arxiv.org
In the realm of Artificial Intelligence Generated Content (AIGC), flow-matching models have
emerged as a powerhouse, achieving success due to their robust theoretical underpinnings …

Long and Short Guidance in Score identity Distillation for One-Step Text-to-Image Generation

M Zhou, Z Wang, H Zheng, H Huang - arXiv preprint arXiv:2406.01561, 2024 - arxiv.org
Diffusion-based text-to-image generation models trained on extensive text-image pairs have
shown the capacity to generate photorealistic images consistent with textual descriptions …

NitroFusion: High-fidelity single-step diffusion through dynamic adversarial training

DY Chen, H Bandyopadhyay, K Zou… - arXiv preprint arXiv …, 2024 - arxiv.org
We introduce NitroFusion, a fundamentally different approach to single-step diffusion that
achieves high-quality generation through a dynamic adversarial framework. While one-step …

Score Forgetting Distillation: A Swift, Data-Free Method for Machine Unlearning in Diffusion Models

T Chen, S Zhang, M Zhou - arXiv preprint arXiv:2409.11219, 2024 - arxiv.org
The machine learning community is increasingly recognizing the importance of fostering
trust and safety in modern generative AI (GenAI) models. We posit machine unlearning (MU) …

Stable Consistency Tuning: Understanding and Improving Consistency Models

FY Wang, Z Geng, H Li - arXiv preprint arXiv:2410.18958, 2024 - arxiv.org
Diffusion models achieve superior generation quality but suffer from slow generation speed
due to the iterative nature of denoising. In contrast, consistency models, a new generative …

Multi-student Diffusion Distillation for Better One-step Generators

Y Song, J Lorraine, W Nie, K Kreis, J Lucas - arXiv preprint arXiv …, 2024 - arxiv.org
Diffusion models achieve high-quality sample generation at the cost of a lengthy multistep
inference procedure. To overcome this, diffusion distillation techniques produce student …