AI-generated content (AIGC) for various data modalities: A survey

LG Foo, H Rahmani, J Liu - arXiv preprint arXiv:2308.14177, 2023 - arxiv.org
AI-generated content (AIGC) methods aim to produce text, images, videos, 3D assets, and
other media using AI algorithms. Due to its wide range of applications and the demonstrated …

Gaussian Head Avatar: Ultra high-fidelity head avatar via dynamic Gaussians

Y Xu, B Chen, Z Li, H Zhang, L Wang… - Proceedings of the …, 2024 - openaccess.thecvf.com
Creating high-fidelity 3D head avatars has long been a research hotspot, but it remains highly challenging under lightweight, sparse-view setups. In this paper, we propose …
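
The title points to a dynamic 3D Gaussian representation of the head. As a rough orientation only, the sketch below shows a generic 3D Gaussian splatting parameterization with a toy, expression-driven linear deformation; the array names, sizes, and the deform_basis term are illustrative assumptions, not the paper's actual model.

    # Illustrative sketch (not the paper's code): generic per-Gaussian attributes
    # of a 3D Gaussian splatting avatar, plus a toy expression-driven deformation.
    import numpy as np

    N = 10_000   # number of Gaussians (arbitrary for this sketch)
    E = 52       # expression-coefficient dimension (assumed, blendshape-like)

    means      = np.zeros((N, 3))                          # 3D centers
    rotations  = np.tile([1.0, 0.0, 0.0, 0.0], (N, 1))     # unit quaternions (w, x, y, z)
    log_scales = np.zeros((N, 3))                          # per-axis scales in log space
    opacities  = np.full((N, 1), 0.1)                      # opacity values
    colors     = np.zeros((N, 3))                          # RGB (real systems often use SH coefficients)

    # Hypothetical linear deformation basis: each expression coefficient moves
    # every Gaussian by a 3D offset, giving the "dynamic" part of the avatar.
    deform_basis = np.zeros((E, N, 3))

    def deformed_means(expression):
        """Expression-dependent Gaussian centers (toy linear deformation)."""
        assert expression.shape == (E,)
        return means + np.tensordot(expression, deform_basis, axes=1)

    posed = deformed_means(np.random.randn(E) * 0.01)
    print(posed.shape)   # (10000, 3)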

PointAvatar: Deformable point-based head avatars from videos

Y Zheng, W Yifan, G Wetzstein… - Proceedings of the …, 2023 - openaccess.thecvf.com
The ability to create realistic, animatable, and relightable head avatars from casual video
sequences would open up wide-ranging applications in communication and entertainment …

HeadSculpt: Crafting 3D head avatars with text

X Han, Y Cao, K Han, X Zhu, J Deng… - Advances in …, 2023 - proceedings.neurips.cc
Recently, text-guided 3D generative methods have made remarkable advancements in
producing high-quality textures and geometry, capitalizing on the proliferation of large vision …

FlashAvatar: High-fidelity head avatar with efficient Gaussian embedding

J Xiang, X Gao, Y Guo, J Zhang - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
We propose FlashAvatar, a novel and lightweight 3D animatable avatar representation that
can reconstruct a digital avatar from a short monocular video sequence in minutes and …

One-shot high-fidelity talking-head synthesis with deformable neural radiance field

W Li, L Zhang, D Wang, B Zhao… - Proceedings of the …, 2023 - openaccess.thecvf.com
Talking head generation aims to generate faces that maintain the identity information of the
source image and imitate the motion of the driving image. Most pioneering methods rely …

Real-time radiance fields for single-image portrait view synthesis

A Trevithick, M Chan, M Stengel, E Chan… - ACM Transactions on …, 2023 - dl.acm.org
We present a one-shot method to infer and render a photorealistic 3D representation from a
single unposed image (e.g., a face portrait) in real time. Given a single RGB input, our image …

HyperReenact: One-shot reenactment via jointly learning to refine and retarget faces

S Bounareli, C Tzelepis, V Argyriou… - Proceedings of the …, 2023 - openaccess.thecvf.com
In this paper, we present our method for neural face reenactment, called HyperReenact, which
aims to generate realistic talking-head images of a source identity, driven by a target facial …

Follow-Your-Emoji: Fine-controllable and expressive freestyle portrait animation

Y Ma, H Liu, H Wang, H Pan, Y He, J Yuan… - SIGGRAPH Asia 2024 …, 2024 - dl.acm.org
We present Follow-Your-Emoji, a diffusion-based framework for portrait animation, which
animates a reference portrait with target landmark sequences. The main challenge of portrait …
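
The snippet describes a diffusion framework that animates a reference portrait under a target landmark sequence. The sketch below is only a hedged illustration of that conditioning pattern: denoise_step is a stand-in placeholder (it ignores the landmarks and just blends toward the reference so the loop runs), and the landmark format, step count T, and blending rule are assumptions, not the paper's method.

    # Illustrative sketch (not Follow-Your-Emoji's code): generating frames from
    # noise, one per target landmark set, conditioned on a reference portrait.
    import numpy as np

    T = 50          # number of reverse-diffusion steps (assumed)
    H = W = 64      # toy frame resolution

    def denoise_step(noisy_frame, reference_image, landmarks, t):
        """Stand-in for one reverse step of a landmark-conditioned denoiser."""
        # A real model would run a conditional UNet on (noisy_frame, landmarks)
        # with identity features from reference_image; here we simply blend
        # toward the reference so the loop is runnable.
        alpha = t / T
        return alpha * noisy_frame + (1.0 - alpha) * reference_image

    reference    = np.random.rand(H, W, 3)                    # reference portrait
    landmark_seq = [np.random.rand(68, 2) for _ in range(8)]  # one landmark set per target frame

    frames = []
    for lm in landmark_seq:
        x = np.random.randn(H, W, 3)           # each output frame starts from noise
        for t in reversed(range(1, T + 1)):    # run the reverse chain t = T..1
            x = denoise_step(x, reference, lm, t)
        frames.append(x)

    print(len(frames), frames[0].shape)        # 8 (64, 64, 3)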

DiffusionAvatars: Deferred diffusion for high-fidelity 3D head avatars

T Kirschstein, S Giebenhain… - Proceedings of the …, 2024 - openaccess.thecvf.com
DiffusionAvatars synthesizes a high-fidelity 3D head avatar of a person, offering intuitive
control over both pose and expression. We propose a diffusion-based neural renderer that …