Training-free consistent text-to-image generation
Text-to-image models offer a new level of creative flexibility by allowing users to guide the
image generation process through natural language. However, using these models to …
Splicing ViT features for semantic appearance transfer
We present a method for semantically transferring the visual appearance of one natural
image to another. Specifically, our goal is to generate an image in which objects in a source …
Cross-image attention for zero-shot appearance transfer
Recent advancements in text-to-image generative models have demonstrated a remarkable
ability to capture a deep semantic understanding of images. In this work, we leverage this …
Generative adversarial networks for image and video synthesis: Algorithms and applications
The generative adversarial network (GAN) framework has emerged as a powerful tool for
various image and video synthesis tasks, allowing the synthesis of visual content in an …
Improved techniques for training single-image GANs
Recently there has been an interest in the potential of learning generative models from a
single image, as opposed to from a large dataset. This task is of significance, as it means …
Drop the GAN: In defense of patches nearest neighbors as single image generative models
Image manipulation dates back long before the deep learning era. The classical prevailing
approaches were based on maximizing patch similarity between the input and generated …
Disentangling structure and appearance in ViT feature space
We present a method for semantically transferring the visual appearance of one natural
image to another. Specifically, our goal is to generate an image in which objects in a source …
SeamlessGAN: Self-supervised synthesis of tileable texture maps
Real-time graphics applications require high-quality textured materials to convey realism in
virtual environments. Generating these textures is challenging as they need to be visually …
Diverse generation from a single video made possible
GANs are able to perform generation and manipulation tasks, trained on a single video.
However, these single-video GANs require an unreasonable amount of time to train on a single …