Diff-instruct: A universal approach for transferring knowledge from pre-trained diffusion models W Luo, T Hu, S Zhang, J Sun, Z Li, Z Zhang Advances in Neural Information Processing Systems 36, 2024 | 75 | 2024 |
A comprehensive survey on knowledge distillation of diffusion models W Luo arXiv preprint arXiv:2304.04262, 2023 | 32 | 2023 |
SA-Solver: Stochastic Adams solver for fast sampling of diffusion models S Xue, M Yi, W Luo, S Zhang, J Sun, Z Li, ZM Ma Advances in Neural Information Processing Systems 36, 2024 | 17 | 2024 |
Consistency Models Made Easy Z Geng, A Pokle, W Luo, J Lin, JZ Kolter arXiv preprint arXiv:2406.14548, 2024 | 12 | 2024 |
Stochastic Adams solver for fast sampling of diffusion models S Xue, M Yi, W Luo, S Zhang, J Sun, Z Li arXiv preprint arXiv:2309.05019, 2023 | 10 | 2023 |
Enhancing adversarial robustness via score-based optimization B Zhang, W Luo, Z Zhang Advances in Neural Information Processing Systems 36, 51810-51829, 2023 | 9 | 2023 |
Integrating amortized inference with diffusion models for learning clean distribution from corrupted images Y Wang, W Bai, W Luo, W Chen, H Sun arXiv preprint arXiv:2407.11162, 2024 | 7 | 2024 |
Variational Schrödinger Diffusion Models W Deng, W Luo, Y Tan, M Biloš, Y Chen, Y Nevmyvaka, RTQ Chen arXiv preprint arXiv:2405.04795, 2024 | 7 | 2024 |
Entropy-based Training Methods for Scalable Neural Implicit Samplers W Luo, B Zhang, Z Zhang Advances in Neural Information Processing Systems 36, 2024 | 7 | 2024 |
Purify++: Improving Diffusion-Purification with Advanced Diffusion Models and Control of Randomness B Zhang, W Luo, Z Zhang arXiv preprint arXiv:2310.18762, 2023 | 6 | 2023 |
One-step diffusion distillation through score implicit matching W Luo, Z Huang, Z Geng, JZ Kolter, G Qi arXiv preprint arXiv:2410.16794, 2024 | 4 | 2024 |
Diff-instruct++: Training one-step text-to-image generator model to align with human preferences W Luo arXiv preprint arXiv:2410.18881, 2024 | 3 | 2024 |
A Lipschitz bandits approach for continuous hyperparameter optimization Y Feng, W Luo, Y Huang, T Wang arXiv preprint arXiv:2302.01539, 2023 | 3 | 2023 |
Diff-Instruct*: Towards Human-Preferred One-step Text-to-image Generative Models W Luo, C Zhang, D Zhang, Z Geng arXiv preprint arXiv:2410.20898, 2024 | 2 | 2024 |
Flow generator matching Z Huang, Z Geng, W Luo, G Qi arXiv preprint arXiv:2410.19310, 2024 | 2 | 2024 |
Training energy-based models with diffusion contrastive divergences W Luo, H Jiang, T Hu, J Sun, Z Li, Z Zhang arXiv preprint arXiv:2307.01668, 2023 | 2 | 2023 |
Schedule On the Fly: Diffusion Time Prediction for Faster and Better Image Generation Z Ye, Z Chen, T Li, Z Huang, W Luo, GJ Qi arXiv preprint arXiv:2412.01243, 2024 | 1 | 2024 |
Self-Guidance: Boosting Flow and Diffusion Generation on Their Own T Li, W Luo, Z Chen, L Ma, GJ Qi arXiv preprint arXiv:2412.05827, 2024 | | 2024 |
Denoising Fisher Training For Neural Implicit Samplers W Luo, W Deng arXiv preprint arXiv:2411.01453, 2024 | | 2024 |
Data Prediction Denoising Models: The Pupil Outdoes the Master W Luo, Z Zhang | | |