Multi-concept customization of text-to-image diffusion

N Kumari, B Zhang, R Zhang… - Proceedings of the …, 2023 - openaccess.thecvf.com
While generative models produce high-quality images of concepts learned from a large-
scale database, a user often wishes to synthesize instantiations of their own concepts (for …

Patch diffusion: Faster and more data-efficient training of diffusion models

Z Wang, Y Jiang, H Zheng, P Wang… - Advances in neural …, 2023 - proceedings.neurips.cc
Diffusion models are powerful, but they require a lot of time and data to train. We propose
Patch Diffusion, a generic patch-wise training framework, to significantly reduce the training …

Ablating concepts in text-to-image diffusion models

N Kumari, B Zhang, SY Wang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Large-scale text-to-image diffusion models can generate high-fidelity images with powerful
compositional ability. However, these models are typically trained on an enormous amount …

DreamBooth: Fine-tuning text-to-image diffusion models for subject-driven generation

N Ruiz, Y Li, V Jampani, Y Pritch… - Proceedings of the …, 2023 - openaccess.thecvf.com
Large text-to-image models achieved a remarkable leap in the evolution of AI, enabling high-
quality and diverse synthesis of images from a given text prompt. However, these models …

DreamVideo: Composing your dream videos with customized subject and motion

Y Wei, S Zhang, Z Qing, H Yuan, Z Liu… - Proceedings of the …, 2024 - openaccess.thecvf.com
Customized generation using diffusion models has made impressive progress in image
generation but remains unsatisfactory in the challenging video generation task as it requires …

StyleGAN-NADA: CLIP-guided domain adaptation of image generators

R Gal, O Patashnik, H Maron, AH Bermano… - ACM Transactions on …, 2022 - dl.acm.org
Can a generative model be trained to produce images from a specific domain, guided only
by a text prompt, without seeing any image? In other words: can an image generator be …

A comprehensive survey on data-efficient GANs in image generation

Z Li, B **a, J Zhang, C Wang, B Li - arxiv preprint arxiv:2204.08329, 2022 - arxiv.org
Generative Adversarial Networks (GANs) have achieved remarkable success in image
synthesis. These successes of GANs rely on large-scale datasets, requiring too much cost …

Few-shot image generation via cross-domain correspondence

U Ojha, Y Li, J Lu, AA Efros, YJ Lee… - Proceedings of the …, 2021 - openaccess.thecvf.com
Training generative models, such as GANs, on a target domain containing limited examples
(e.g., 10) can easily result in overfitting. In this work, we seek to utilize a large source domain …

Anomalydiffusion: Few-shot anomaly image generation with diffusion model

T Hu, J Zhang, R Yi, Y Du, X Chen, L Liu… - Proceedings of the …, 2024 - ojs.aaai.org
Anomaly inspection plays an important role in industrial manufacturing. Existing anomaly
inspection methods are limited in their performance due to insufficient anomaly data …

Image-to-image translation: Methods and applications

Y Pang, J Lin, T Qin, Z Chen - IEEE Transactions on Multimedia, 2021 - ieeexplore.ieee.org
Image-to-image translation (I2I) aims to transfer images from a source domain to a target
domain while preserving the content representations. I2I has drawn increasing attention and …