A comprehensive survey on test-time adaptation under distribution shifts

J Liang, R He, T Tan - International Journal of Computer Vision, 2024 - Springer
Machine learning methods strive to acquire a robust model during the training
process that can effectively generalize to test samples, even in the presence of distribution …

State of the art on neural rendering

A Tewari, O Fried, J Thies, V Sitzmann… - Computer Graphics …, 2020 - Wiley Online Library
Efficient rendering of photo‐realistic virtual worlds is a long-standing effort of computer
graphics. Modern graphics techniques have succeeded in synthesizing photo‐realistic …

Imagic: Text-based real image editing with diffusion models

B Kawar, S Zada, O Lang, O Tov… - Proceedings of the …, 2023 - openaccess.thecvf.com
Text-conditioned image editing has recently attracted considerable interest. However, most
methods are currently limited to one of the following: specific editing types (e.g., object …

An image is worth one word: Personalizing text-to-image generation using textual inversion

R Gal, Y Alaluf, Y Atzmon, O Patashnik… - arXiv preprint arXiv …, 2022 - arxiv.org
Text-to-image models offer unprecedented freedom to guide creation through natural
language. Yet, it is unclear how such freedom can be exercised to generate images of …
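
The textual-inversion entry above refers to learning a single new token embedding against a frozen text-to-image model while leaving all pretrained weights untouched. The toy sketch below illustrates only that idea; the `frozen_text_encoder`, `frozen_denoiser`, feature shapes, and reconstruction loss are illustrative placeholders of my own, not the authors' architecture or training objective.

```python
import torch

torch.manual_seed(0)

# Frozen stand-ins for a pretrained text-to-image pipeline (placeholders).
embed_dim = 768
frozen_text_encoder = torch.nn.Linear(embed_dim, embed_dim)
frozen_denoiser = torch.nn.Linear(embed_dim, embed_dim)
for p in list(frozen_text_encoder.parameters()) + list(frozen_denoiser.parameters()):
    p.requires_grad_(False)

# The only trainable quantity: one new token embedding for the personalized concept.
new_token = torch.nn.Parameter(torch.randn(embed_dim) * 0.01)
opt = torch.optim.Adam([new_token], lr=5e-3)

# A handful of "reference images" of the concept, reduced here to feature vectors.
reference_feats = torch.randn(4, embed_dim)

for step in range(200):
    # Condition the frozen model on the learned token and score how well it
    # matches the reference features (a stand-in for the real diffusion loss).
    cond = frozen_text_encoder(new_token)
    pred = frozen_denoiser(cond).expand_as(reference_feats)
    loss = torch.nn.functional.mse_loss(pred, reference_feats)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("learned embedding norm:", new_token.norm().item())
```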

SDEdit: Guided image synthesis and editing with stochastic differential equations

C Meng, Y He, Y Song, J Song, J Wu, JY Zhu… - arXiv preprint arXiv …, 2021 - arxiv.org
Guided image synthesis enables everyday users to create and edit photo-realistic images
with minimum effort. The key challenge is balancing faithfulness to the user input (e.g., hand …
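
SDEdit's title refers to a noise-then-denoise scheme: perturb a user-provided guide image to an intermediate noise level, then run the learned reverse process from that level instead of from pure noise, so the chosen level trades faithfulness to the guide against realism. The sketch below is a minimal DDPM-style illustration of that idea with a placeholder denoiser; the noise schedule, step count, and function name are my assumptions, not the paper's implementation.

```python
import torch

def sdedit_style_edit(guide_image, denoiser, num_steps=1000, t0=0.5):
    """Noise-then-denoise sketch: perturb the guide to an intermediate step
    t0 * num_steps, then run the reverse chain from there down to step 0.
    `denoiser(x, t)` is a placeholder for a pretrained noise predictor."""
    betas = torch.linspace(1e-4, 2e-2, num_steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    start = int(t0 * num_steps)  # smaller t0: more faithful; larger t0: more realistic
    a_bar = alpha_bars[start]
    x = a_bar.sqrt() * guide_image + (1 - a_bar).sqrt() * torch.randn_like(guide_image)

    # DDPM-style reverse updates from the intermediate step back to 0.
    for t in range(start, -1, -1):
        eps = denoiser(x, t)
        a, ab = alphas[t], alpha_bars[t]
        mean = (x - (1 - a) / (1 - ab).sqrt() * eps) / a.sqrt()
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + betas[t].sqrt() * noise
    return x

# Toy usage with a random "guide" image and a trivial placeholder denoiser.
guide = torch.randn(1, 3, 64, 64)
edited = sdedit_style_edit(guide, denoiser=lambda x, t: torch.zeros_like(x))
print(edited.shape)
```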

Zero-shot image-to-image translation

G Parmar, K Kumar Singh, R Zhang, Y Li, J Lu… - ACM SIGGRAPH 2023 …, 2023 - dl.acm.org
Large-scale text-to-image generative models have shown their remarkable ability to
synthesize diverse, high-quality images. However, directly applying these models for real …

DiffusionCLIP: Text-guided diffusion models for robust image manipulation

G Kim, T Kwon, JC Ye - … of the IEEE/CVF conference on …, 2022 - openaccess.thecvf.com
Recently, GAN inversion methods combined with Contrastive Language-Image Pretraining
(CLIP) have enabled zero-shot image manipulation guided by text prompts. However, their …

Paint by example: Exemplar-based image editing with diffusion models

B Yang, S Gu, B Zhang, T Zhang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Language-guided image editing has achieved great success recently. In this paper,
we investigate exemplar-guided image editing for more precise control. We achieve this …

Blended latent diffusion

O Avrahami, O Fried, D Lischinski - ACM transactions on graphics (TOG), 2023 - dl.acm.org
The tremendous progress in neural image generation, coupled with the emergence of
seemingly omnipotent vision-language models, has finally enabled text-based interfaces for …
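
The core operation behind mask-based blended editing of this kind is to combine, at each denoising step, the text-guided latent inside the user's mask with a correspondingly noised latent of the original image outside it. The snippet below shows only that per-step blend; the function name, latent shapes, and mask layout are illustrative assumptions rather than the paper's code.

```python
import torch

def blended_denoising_step(edited_latent, orig_noisy_latent, mask):
    """One blending step: keep the edited latent inside the mask and the
    (appropriately noised) original latent outside it. All tensors share
    the same latent resolution; `mask` is 1 inside the edit region."""
    return mask * edited_latent + (1 - mask) * orig_noisy_latent

# Toy shapes standing in for a latent-space image and a binary edit mask.
edited_latent = torch.randn(1, 4, 32, 32)   # text-guided latent at step t
orig_noisy = torch.randn(1, 4, 32, 32)      # original image latent, noised to step t
mask = torch.zeros(1, 1, 32, 32)
mask[..., 8:24, 8:24] = 1.0                 # edit only the central region

blended = blended_denoising_step(edited_latent, orig_noisy, mask)
print(blended.shape)
```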

Pivotal tuning for latent-based editing of real images

D Roich, R Mokady, AH Bermano… - ACM Transactions on …, 2022 - dl.acm.org
Recently, numerous facial editing techniques have been proposed that leverage the
generative power of a pretrained StyleGAN. To successfully edit an image this way, one …