A review of single-source deep unsupervised visual domain adaptation

S Zhao, X Yue, S Zhang, B Li, H Zhao… - … on Neural Networks …, 2020 - ieeexplore.ieee.org
Large-scale labeled training datasets have enabled deep neural networks to excel across a
wide range of benchmark vision tasks. However, in many applications, it is prohibitively …

NTIRE 2023 video colorization challenge

X Kang, X Lin, K Zhang, Z Hui… - Proceedings of the …, 2023 - openaccess.thecvf.com
This paper reviews the video colorization challenge at the New Trends in Image
Restoration and Enhancement (NTIRE) workshop, held in conjunction with CVPR 2023. The …

FateZero: Fusing attentions for zero-shot text-based video editing

C Qi, X Cun, Y Zhang, C Lei, X Wang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Diffusion-based generative models have achieved remarkable success in text-based
image generation. However, since generation involves enormous randomness …

TokenFlow: Consistent diffusion features for consistent video editing

M Geyer, O Bar-Tal, S Bagon, T Dekel - arXiv preprint arXiv:2307.10373, 2023 - arxiv.org
The generative AI revolution has recently expanded to videos. Nevertheless, current state-of-
the-art video models are still lagging behind image models in terms of visual quality and …

ControlVideo: Training-free controllable text-to-video generation

Y Zhang, Y Wei, D Jiang, X Zhang, W Zuo… - arXiv preprint arXiv …, 2023 - arxiv.org
Text-driven diffusion models have unlocked unprecedented abilities in image generation,
whereas their video counterpart still lags behind due to the excessive training cost of …

ViewDiff: 3D-consistent image generation with text-to-image models

L Höllein, A Božič, N Müller… - Proceedings of the …, 2024 - openaccess.thecvf.com
3D asset generation is attracting massive attention, inspired by the recent
success of text-guided 2D content creation. Existing text-to-3D methods use pretrained text …

Consistent view synthesis with pose-guided diffusion models

HY Tseng, Q Li, C Kim, S Alsisan… - Proceedings of the …, 2023 - openaccess.thecvf.com
Novel view synthesis from a single image has been a cornerstone problem for many Virtual
Reality applications that provide immersive experiences. However, most existing techniques …

EvalCrafter: Benchmarking and evaluating large video generation models

Y Liu, X Cun, X Liu, X Wang, Y Zhang… - Proceedings of the …, 2024 - openaccess.thecvf.com
Vision and language generative models have grown rapidly in recent years. For
video generation, various open-source models and publicly available services have been …

ProPainter: Improving propagation and transformer for video inpainting

S Zhou, C Li, KCK Chan… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Flow-based propagation and spatiotemporal Transformer are two mainstream mechanisms
in video inpainting (VI). Despite the effectiveness of these components, they still suffer from …

Towards an end-to-end framework for flow-guided video inpainting

Z Li, CZ Lu, J Qin, CL Guo… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Optical flow, which captures motion information across frames, is exploited in recent video
inpainting methods through propagating pixels along its trajectories. However, the hand …