Knowledge distillation and student-teacher learning for visual intelligence: A review and new outlooks
L Wang, KJ Yoon - IEEE transactions on pattern analysis and …, 2021 - ieeexplore.ieee.org
In recent years, deep neural models have been successful in almost every field, even solving the most complex problems. However, these models are huge in size, with …
DreamBooth: Fine-tuning text-to-image diffusion models for subject-driven generation
Large text-to-image models have achieved a remarkable leap in the evolution of AI, enabling high-quality and diverse synthesis of images from a given text prompt. However, these models …
Multi-concept customization of text-to-image diffusion
While generative models produce high-quality images of concepts learned from a large-
scale database, a user often wishes to synthesize instantiations of their own concepts (for …
StyleGAN-NADA: CLIP-guided domain adaptation of image generators
Can a generative model be trained to produce images from a specific domain, guided only
by a text prompt, without seeing any image? In other words: can an image generator be …
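The snippet above only poses the question; the mechanism behind StyleGAN-NADA is a directional CLIP loss that nudges a trainable copy of the generator away from a frozen copy. Below is a minimal sketch of that idea, assuming a PyTorch setup; `g_frozen`, `g_train`, `clip_image_encode`, `clip_text_encode`, and the prompts are placeholders rather than the authors' released code.

```python
# A minimal sketch of the directional CLIP loss used for text-guided domain
# adaptation: the change between frozen and adapted generator outputs in CLIP
# image space is aligned with the change between source and target prompts in
# CLIP text space. The encoders and generators here are assumed placeholders.
import torch
import torch.nn.functional as F

def directional_clip_loss(g_frozen, g_train, w, clip_image_encode, clip_text_encode,
                          source_prompt="photo", target_prompt="sketch"):
    # Direction in CLIP text space between the source and target domains
    # (clip_text_encode is assumed to return a (1, D) embedding).
    delta_text = clip_text_encode(target_prompt) - clip_text_encode(source_prompt)
    # Direction in CLIP image space between frozen and adapted generator outputs.
    with torch.no_grad():
        img_src = g_frozen(w)
    img_tgt = g_train(w)
    delta_img = clip_image_encode(img_tgt) - clip_image_encode(img_src)
    # Encourage the two directions to align (cosine similarity -> 1).
    return 1.0 - F.cosine_similarity(delta_img, delta_text, dim=-1).mean()
```

Aligning the direction between embeddings, rather than pushing every generated image directly toward the target text, is what lets the adapted generator retain the diversity of the source domain.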
Training generative adversarial networks with limited data
Training generative adversarial networks (GAN) using too little data typically leads to
discriminator overfitting, causing training to diverge. We propose an adaptive discriminator …
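The adaptive mechanism referenced above is a feedback loop: an overfitting heuristic computed from the discriminator's outputs on real images drives the strength of augmentations applied to every image the discriminator sees. A minimal sketch follows, assuming a PyTorch setup; the models, `augment_pipeline`, and the fixed adjustment step are placeholders, not the authors' implementation, while the r_t heuristic and its 0.6 target follow the paper.

```python
# A minimal sketch of adaptive discriminator augmentation (ADA): the fraction
# of real images the discriminator rates positively, r_t, rises toward 1 as it
# starts to memorize the training set, and the augmentation probability p is
# adjusted to keep r_t near a target value.
import torch
import torch.nn.functional as F

def ada_update_p(d_real_logits, p, target=0.6, adjust=0.01, p_max=0.9):
    """Nudge the augmentation probability p toward the overfitting target."""
    r_t = torch.sign(d_real_logits).mean().item()
    p = p + adjust if r_t > target else p - adjust
    return min(max(p, 0.0), p_max)

def discriminator_step(D, G, reals, z, p, augment_pipeline):
    """One discriminator update with the same stochastic augmentations
    (applied with probability p) on both real and generated images."""
    fakes = G(z).detach()
    d_real = D(augment_pipeline(reals, p))
    d_fake = D(augment_pipeline(fakes, p))
    # Non-saturating GAN loss for the discriminator (one common choice).
    loss = F.softplus(-d_real).mean() + F.softplus(d_fake).mean()
    loss.backward()
    p = ada_update_p(d_real.detach(), p)
    return loss, p
```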
Ablating concepts in text-to-image diffusion models
Large-scale text-to-image diffusion models can generate high-fidelity images with powerful
compositional ability. However, these models are typically trained on an enormous amount …
Applications and techniques for fast machine learning in science
In this community review report, we discuss applications and techniques for fast machine
learning (ML) in science—the concept of integrating powerful ML methods into the real-time …
Differentiable augmentation for data-efficient GAN training
The performance of generative adversarial networks (GANs) heavily deteriorates given a
limited amount of training data. This is mainly because the discriminator is memorizing the …
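Because the discriminator memorizes the small training set, the remedy described in this paper is to apply the same differentiable transform T to both real and generated images in the discriminator and generator objectives, so the augmentations do not leak into the generated distribution and gradients still flow to the generator. A minimal PyTorch sketch follows; the brightness/translation transforms are illustrative choices, and G, D, and the loss form are placeholders.

```python
# A minimal sketch of differentiable augmentation for GAN training: one
# differentiable transform T is applied to reals and fakes in both the
# discriminator and generator losses, so the generator is trained through T.
import torch
import torch.nn.functional as F

def T(x):
    """Differentiable augmentation: random brightness shift + circular translation."""
    x = x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5)          # brightness
    shift = torch.randint(-2, 3, (2,))
    x = torch.roll(x, shifts=(int(shift[0]), int(shift[1])), dims=(2, 3))    # translation
    return x

def gan_losses(D, G, reals, z):
    fakes = G(z)
    # Discriminator sees augmented reals and augmented fakes.
    d_loss = F.softplus(-D(T(reals))).mean() + F.softplus(D(T(fakes.detach()))).mean()
    # Generator is also trained through the augmented fakes.
    g_loss = F.softplus(-D(T(fakes))).mean()
    return d_loss, g_loss
```

The transforms must stay differentiable (here an additive brightness shift and a roll-based translation) so that g_loss can backpropagate through T into the generator.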
GAN prior embedded network for blind face restoration in the wild
Blind face restoration (BFR) from severely degraded face images in the wild is a very
challenging problem. Due to the severe ill-posedness of the problem and the complex unknown …
Few-shot image generation via cross-domain correspondence
Training generative models, such as GANs, on a target domain containing limited examples
(e.g., 10) can easily result in overfitting. In this work, we seek to utilize a large source domain …
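One way to utilize a large source domain, in the spirit of the cross-domain correspondence idea named in the title, is to keep a frozen source generator and ask the adapted generator to preserve the pairwise similarity structure among images produced from the same latents. The sketch below is an assumed, simplified version of such a distance-consistency loss; the generators, feature extractor, temperature, and the direction of the KL term are all placeholder choices, not the paper's exact formulation.

```python
# A simplified cross-domain distance-consistency sketch: for a batch of
# latents, the softmax-normalized pairwise similarities of adapted-generator
# features are matched to those of the frozen source generator, transferring
# source-domain diversity instead of overfitting to a handful of targets.
import torch
import torch.nn.functional as F

def distance_consistency_loss(g_source, g_adapted, z_batch, features, tau=0.1):
    with torch.no_grad():
        f_src = features(g_source(z_batch))      # (N, D) source-domain features
    f_tgt = features(g_adapted(z_batch))         # (N, D) adapted-domain features

    def pairwise_logits(f):
        sim = F.cosine_similarity(f.unsqueeze(1), f.unsqueeze(0), dim=-1)   # (N, N)
        sim = sim - torch.eye(f.size(0), device=f.device) * 1e9             # mask self-similarity
        return sim / tau

    # Match each image's similarity distribution across the two generators.
    return F.kl_div(F.log_softmax(pairwise_logits(f_tgt), dim=-1),
                    F.softmax(pairwise_logits(f_src), dim=-1),
                    reduction="batchmean")
```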