Progressive disentangled representation learning for fine-grained controllable talking head synthesis

D Wang, Y Deng, Z Yin, HY Shum… - Proceedings of the …, 2023 - openaccess.thecvf.com
We present a novel one-shot talking head synthesis method that achieves disentangled and
fine-grained control over lip motion, eye gaze & blink, head pose, and emotional expression …

Relightable neural human assets from multi-view gradient illuminations

T Zhou, K He, D Wu, T Xu, Q Zhang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Human modeling and relighting are two fundamental problems in computer vision and
graphics, where high-quality datasets can largely facilitate related research. However, most …

FaceGAN: Facial attribute controllable reenactment GAN

S Tripathy, J Kannala, E Rahtu - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Face reenactment is a popular facial animation method where the person's identity is
taken from the source image and the facial motion from the driving image. Recent works …

VOODOO 3D: Volumetric Portrait Disentanglement for One-Shot 3D Head Reenactment

P Tran, E Zakharov, LN Ho, AT Tran… - Proceedings of the …, 2024 - openaccess.thecvf.com
We present a 3D-aware one-shot head reenactment method based on a fully volumetric
neural disentanglement framework for source appearance and driver expressions. Our …

DiffPortrait3D: Controllable Diffusion for Zero-Shot Portrait View Synthesis

Y Gu, H Xu, Y Xie, G Song, Y Shi… - Proceedings of the …, 2024 - openaccess.thecvf.com
We present DiffPortrait3D, a conditional diffusion model that is capable of synthesizing 3D-
consistent photo-realistic novel views from as few as a single in-the-wild portrait. Specifically …

Cross-domain and disentangled face manipulation with 3d guidance

C Wang, M Chai, M He, D Chen… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Face image manipulation via three-dimensional guidance has been widely applied in
various interactive scenarios due to its semantically-meaningful understanding and user …

High-fidelity face reenactment via identity-matched correspondence learning

H Xue, J Ling, A Tang, L Song, R Xie… - ACM Transactions on …, 2023 - dl.acm.org
Face reenactment aims to generate an animation of a source face using the poses and
expressions from a target face. Although recent methods have made remarkable progress …

DisUnknown: Distilling unknown factors for disentanglement learning

S Xiang, Y Gu, P Xiang, M Chai, H Li… - Proceedings of the …, 2021 - openaccess.thecvf.com
Disentangling data into interpretable and independent factors is critical for controllable
generation tasks. With the availability of labeled data, supervision can help enforce the …

Towards High-Fidelity 3D Portrait Generation with Rich Details by Cross-View Prior-Aware Diffusion

H Wei, W Han, X Dong, J Shen - arXiv preprint arXiv:2411.10369, 2024 - arxiv.org
Recent diffusion-based single-image 3D portrait generation methods typically employ 2D
diffusion models to provide multi-view knowledge, which is then distilled into 3D …

Emotional Conversation: Empowering Talking Faces with Cohesive Expression, Gaze and Pose Generation

J Liang, F Lu - arXiv preprint arXiv:2406.07895, 2024 - arxiv.org
Vivid talking face generation holds immense potential for applications across diverse
multimedia domains, such as film and game production. While existing methods accurately …