GaussianAvatar: Towards realistic human avatar modeling from a single video via animatable 3D Gaussians

L Hu, H Zhang, Y Zhang, B Zhou… - Proceedings of the …, 2024 - openaccess.thecvf.com
We present GaussianAvatar, an efficient approach to creating realistic human avatars with
dynamic 3D appearances from a single video. We start by introducing animatable 3D …

Animatable Gaussians: Learning pose-dependent Gaussian maps for high-fidelity human avatar modeling

Z Li, Z Zheng, L Wang, Y Liu - Proceedings of the IEEE/CVF …, 2024 - openaccess.thecvf.com
Modeling animatable human avatars from RGB videos is a long-standing and challenging
problem. Recent works usually adopt MLP-based neural radiance fields (NeRF) to represent …

HumanRF: High-fidelity neural radiance fields for humans in motion

M Işık, M Rünz, M Georgopoulos, T Khakhulin… - ACM Transactions on …, 2023 - dl.acm.org
Representing human performance at high fidelity is an essential building block in diverse
applications, such as film production, computer games, or videoconferencing. To close the …

PointAvatar: Deformable point-based head avatars from videos

Y Zheng, W Yifan, G Wetzstein… - Proceedings of the …, 2023 - openaccess.thecvf.com
The ability to create realistic, animatable and relightable head avatars from casual video
sequences would open up wide-ranging applications in communication and entertainment …

GART: Gaussian articulated template models

J Lei, Y Wang, G Pavlakos, L Liu… - Proceedings of the …, 2024 - openaccess.thecvf.com
We introduce the Gaussian Articulated Template Model (GART), an explicit, efficient and
expressive representation for non-rigid articulated subject capturing and rendering from …

Vid2Avatar: 3D avatar reconstruction from videos in the wild via self-supervised scene decomposition

C Guo, T Jiang, X Chen, J Song… - Proceedings of the …, 2023 - openaccess.thecvf.com
We present Vid2Avatar, a method to learn human avatars from monocular in-the-wild
videos. Reconstructing humans that move naturally from monocular in-the-wild videos …

3DGS-Avatar: Animatable avatars via deformable 3D Gaussian splatting

Z Qian, S Wang, M Mihajlovic… - Proceedings of the …, 2024 - openaccess.thecvf.com
We introduce an approach that creates animatable human avatars from monocular videos
using 3D Gaussian Splatting (3DGS). Existing methods based on neural radiance fields …

InstantAvatar: Learning avatars from monocular video in 60 seconds

T Jiang, X Chen, J Song… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
In this paper, we take one step further towards real-world applicability of monocular neural
avatar reconstruction by contributing InstantAvatar, a system that can reconstruct human …

AvatarReX: Real-time expressive full-body avatars

Z Zheng, X Zhao, H Zhang, B Liu, Y Liu - ACM Transactions on Graphics …, 2023 - dl.acm.org
We present AvatarReX, a new method for learning NeRF-based full-body avatars from video
data. The learnt avatar not only provides expressive control of the body, hands and the face …

GauHuman: Articulated Gaussian splatting from monocular human videos

S Hu, T Hu, Z Liu - … of the IEEE/CVF Conference on …, 2024 - openaccess.thecvf.com
We present GauHuman, a 3D human model with Gaussian Splatting for both fast training (1-2
minutes) and real-time rendering (up to 189 FPS), compared with existing NeRF-based …
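A common thread across these entries (GaussianAvatar, GART, 3DGS-Avatar, GauHuman) is representing the body as a set of canonical 3D Gaussians that are warped into posed space with linear blend skinning (LBS) before splatting. The snippet below is a minimal sketch of that deformation step only, not any single paper's method; the function and variable names (lbs_deform_gaussians, means_c, skin_weights, joint_transforms) are hypothetical placeholders, and the skinning weights and bone transforms are assumed to come from a parametric body model such as SMPL.

```python
import numpy as np

def lbs_deform_gaussians(means_c, covs_c, skin_weights, joint_transforms):
    """Warp canonical 3D Gaussians into posed space via linear blend skinning.

    means_c:          (N, 3)    canonical Gaussian centers
    covs_c:           (N, 3, 3) canonical Gaussian covariances
    skin_weights:     (N, J)    per-Gaussian skinning weights (rows sum to 1)
    joint_transforms: (J, 4, 4) rigid bone transforms (e.g. from an SMPL pose)
    """
    # Blend the per-joint rigid transforms into one 4x4 matrix per Gaussian.
    T = np.einsum('nj,jab->nab', skin_weights, joint_transforms)  # (N, 4, 4)
    R, t = T[:, :3, :3], T[:, :3, 3]

    # Move each Gaussian: rotate and translate the mean,
    # and conjugate the covariance by the blended rotation.
    means_p = np.einsum('nab,nb->na', R, means_c) + t
    covs_p = R @ covs_c @ np.transpose(R, (0, 2, 1))
    return means_p, covs_p


# Toy usage: two Gaussians, two joints, identity pose (no deformation).
means = np.random.randn(2, 3)
covs = np.tile(np.eye(3) * 0.01, (2, 1, 1))
weights = np.array([[1.0, 0.0], [0.3, 0.7]])
transforms = np.tile(np.eye(4), (2, 1, 1))
posed_means, posed_covs = lbs_deform_gaussians(means, covs, weights, transforms)
```

The posed means and covariances would then be passed to a standard 3DGS rasterizer; the papers above differ mainly in how they predict pose-dependent corrections on top of this rigid skinning.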