Training-free consistent text-to-image generation

Y Tewel, O Kaduri, R Gal, Y Kasten, L Wolf… - ACM Transactions on …, 2024 - dl.acm.org
Text-to-image models offer a new level of creative flexibility by allowing users to guide the
image generation process through natural language. However, using these models to …

Splicing ViT features for semantic appearance transfer

N Tumanyan, O Bar-Tal, S Bagon… - Proceedings of the …, 2022 - openaccess.thecvf.com
We present a method for semantically transferring the visual appearance of one natural
image to another. Specifically, our goal is to generate an image in which objects in a source …

Cross-image attention for zero-shot appearance transfer

Y Alaluf, D Garibi, O Patashnik… - ACM SIGGRAPH 2024 …, 2024 - dl.acm.org
Recent advancements in text-to-image generative models have demonstrated a remarkable
ability to capture a deep semantic understanding of images. In this work, we leverage this …

Generative adversarial networks for image and video synthesis: Algorithms and applications

MY Liu, X Huang, J Yu, TC Wang… - Proceedings of the …, 2021 - ieeexplore.ieee.org
The generative adversarial network (GAN) framework has emerged as a powerful tool for
various image and video synthesis tasks, allowing the synthesis of visual content in an …

Improved techniques for training single-image GANs

T Hinz, M Fisher, O Wang… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Recently, there has been interest in learning generative models from a single image rather
than from a large dataset. This task is of significance, as it means …

Drop the GAN: In defense of patches nearest neighbors as single image generative models

N Granot, B Feinstein, A Shocher… - Proceedings of the …, 2022 - openaccess.thecvf.com
Image manipulation dates back long before the deep learning era. The classical prevailing
approaches were based on maximizing patch similarity between the input and generated …
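The entry above defends the classical patch-based route it mentions: rebuilding an output image so that every patch closely matches some patch of the input. As a rough illustration of that idea only (a hypothetical brute-force sketch, not the paper's multi-scale implementation), nearest-neighbor patch replacement can be written in a few lines of NumPy:

    import numpy as np

    def extract_patches(img, p):
        # Collect every overlapping p x p patch of an (H, W, C) image as a flat row.
        H, W, _ = img.shape
        rows = [img[i:i + p, j:j + p].reshape(-1)
                for i in range(H - p + 1)
                for j in range(W - p + 1)]
        return np.stack(rows)

    def patch_nn_replace(source, target, p=7):
        # Rebuild `target` so each of its p x p patches is replaced by its nearest
        # (squared-L2) neighbor among the patches of `source`; overlaps are averaged.
        source = source.astype(np.float64)
        target = target.astype(np.float64)
        src, tgt = extract_patches(source, p), extract_patches(target, p)
        d2 = (tgt ** 2).sum(1)[:, None] - 2.0 * tgt @ src.T + (src ** 2).sum(1)[None, :]
        matched = src[d2.argmin(axis=1)]
        H, W, C = target.shape
        out = np.zeros((H, W, C))
        weight = np.zeros((H, W, 1))
        k = 0
        for i in range(H - p + 1):
            for j in range(W - p + 1):
                out[i:i + p, j:j + p] += matched[k].reshape(p, p, C)
                weight[i:i + p, j:j + p] += 1.0
                k += 1
        return out / weight

A full system would search coarse-to-fine over an image pyramid and use an approximate nearest-neighbor scheme instead of the dense distance matrix above, which grows quadratically with the number of patches.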

Disentangling Structure and Appearance in ViT Feature Space

N Tumanyan, O Bar-Tal, S Amir, S Bagon… - ACM Transactions on …, 2023 - dl.acm.org
We present a method for semantically transferring the visual appearance of one natural
image to another. Specifically, our goal is to generate an image in which objects in a source …
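The two ViT-feature entries above (the conference and journal versions of the same work) rely on descriptors from a self-supervised vision transformer. As a loose, illustrative sketch only (it assumes the publicly released DINO ViT on torch.hub and uses block outputs rather than the attention keys the papers analyze), one way to read off a global appearance token and a structure cue is:

    import torch

    # dino_vits16 is the self-supervised ViT released with DINO (assumes torch.hub access).
    model = torch.hub.load("facebookresearch/dino:main", "dino_vits16")
    model.eval()

    @torch.no_grad()
    def vit_descriptors(img):
        # img: (1, 3, H, W) tensor, ImageNet-normalized, H and W divisible by 16.
        tokens = model.get_intermediate_layers(img, n=1)[0]  # (1, 1 + N, D), last block
        cls_token = tokens[:, 0]   # global token, often read as an "appearance" descriptor
        patches = tokens[:, 1:]    # one token per 16 x 16 image patch
        # Self-similarity of the spatial tokens: a structure cue that is fairly
        # insensitive to appearance changes.
        normed = torch.nn.functional.normalize(patches, dim=-1)
        self_sim = normed @ normed.transpose(1, 2)           # (1, N, N)
        return cls_token, self_sim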

SeamlessGAN: Self-supervised synthesis of tileable texture maps

C Rodriguez-Pardo, E Garces - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Real-time graphics applications require high-quality textured materials to convey realism in
virtual environments. Generating these textures is challenging as they need to be visually …

Diverse generation from a single video made possible

N Haim, B Feinstein, N Granot, A Shocher… - … on Computer Vision, 2022 - Springer
GANs can perform generation and manipulation tasks when trained on a single video.
However, these single-video GANs require an unreasonable amount of time to train on a single …