GaussianAvatar: Towards realistic human avatar modeling from a single video via animatable 3D Gaussians
We present GaussianAvatar, an efficient approach to creating realistic human avatars with dynamic 3D appearances from a single video. We start by introducing animatable 3D …
Animatable Gaussians: Learning pose-dependent Gaussian maps for high-fidelity human avatar modeling
Modeling animatable human avatars from RGB videos is a long-standing and challenging problem. Recent works usually adopt MLP-based neural radiance fields (NeRF) to represent …
HumanRF: High-fidelity neural radiance fields for humans in motion
Representing human performance at high fidelity is an essential building block in diverse applications, such as film production, computer games, or videoconferencing. To close the …
PointAvatar: Deformable point-based head avatars from videos
The ability to create realistic, animatable, and relightable head avatars from casual video sequences would open up wide-ranging applications in communication and entertainment …
GART: Gaussian articulated template models
We introduce Gaussian Articulated Template Model (GART), an explicit, efficient, and expressive representation for non-rigid articulated subject capturing and rendering from …
Vid2Avatar: 3D avatar reconstruction from videos in the wild via self-supervised scene decomposition
We present Vid2Avatar, a method to learn human avatars from monocular in-the-wild videos. Reconstructing humans that move naturally from monocular in-the-wild videos …
3DGS-Avatar: Animatable avatars via deformable 3D Gaussian splatting
We introduce an approach that creates animatable human avatars from monocular videos using 3D Gaussian Splatting (3DGS). Existing methods based on neural radiance fields …
InstantAvatar: Learning avatars from monocular video in 60 seconds
In this paper, we take one step further towards real-world applicability of monocular neural avatar reconstruction by contributing InstantAvatar, a system that can reconstruct human …
AvatarReX: Real-time expressive full-body avatars
We present AvatarReX, a new method for learning NeRF-based full-body avatars from video data. The learnt avatar not only provides expressive control of the body, hands and the face …
GauHuman: Articulated Gaussian splatting from monocular human videos
We present GauHuman, a 3D human model with Gaussian Splatting for both fast training (1-2 minutes) and real-time rendering (up to 189 FPS), compared with existing NeRF-based …