K-Planes: Explicit radiance fields in space, time, and appearance

S Fridovich-Keil, G Meanti… - Proceedings of the …, 2023 - openaccess.thecvf.com
We introduce k-planes, a white-box model for radiance fields in arbitrary dimensions. Our
model uses d-choose-2 planes to represent a d-dimensional scene, providing a seamless …
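The "d-choose-2 planes" construction can be illustrated with a minimal sketch: each pair of axes gets one feature plane, and per-plane lookups are fused by elementwise multiplication, as the paper describes. The grid resolution, feature dimension, and nearest-neighbor lookup below are illustrative simplifications, not values or details from the paper.

```python
from itertools import combinations
import random

def kplane_pairs(d):
    """Axis pairs (i, j) with i < j: exactly d-choose-2 of them."""
    return list(combinations(range(d), 2))

def make_planes(d, res=8, feat=4, seed=0):
    """One res x res grid of feat-dim features per axis pair (toy values)."""
    rng = random.Random(seed)
    return {
        pair: [[[rng.random() for _ in range(feat)]
                for _ in range(res)]
               for _ in range(res)]
        for pair in kplane_pairs(d)
    }

def query(planes, point, res=8, feat=4):
    """Project the point onto each plane, look up a feature
    (nearest-neighbor here for simplicity), and fuse by product."""
    out = [1.0] * feat
    for (i, j), grid in planes.items():
        u = min(int(point[i] * res), res - 1)
        v = min(int(point[j] * res), res - 1)
        out = [a * b for a, b in zip(out, grid[u][v])]
    return out

# A 4D (x, y, z, t) scene uses 4-choose-2 = 6 planes:
# (x,y), (x,z), (x,t), (y,z), (y,t), (z,t).
planes = make_planes(4)
feature = query(planes, (0.2, 0.5, 0.7, 0.9))
```

For a static 3D scene the same code yields the familiar tri-plane layout of 3 planes.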

HexPlane: A fast representation for dynamic scenes

A Cao, J Johnson - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Modeling and re-rendering dynamic 3D scenes is a challenging task in 3D vision. Prior
approaches build on NeRF and rely on implicit representations. This is slow since it requires …

Deformable 3D Gaussians for high-fidelity monocular dynamic scene reconstruction

Z Yang, X Gao, W Zhou, S Jiao… - Proceedings of the …, 2024 - openaccess.thecvf.com
Implicit neural representation has paved the way for new approaches to dynamic scene
reconstruction. Nonetheless, cutting-edge dynamic neural rendering methods rely heavily on …

Align Your Gaussians: Text-to-4D with dynamic 3D Gaussians and composed diffusion models

H Ling, SW Kim, A Torralba… - Proceedings of the …, 2024 - openaccess.thecvf.com
Text-guided diffusion models have revolutionized image and video generation and have
also been successfully used for optimization-based 3D object synthesis. Here we instead …

State of the art in dense monocular non-rigid 3D reconstruction

E Tretschk, N Kairanda, M BR, R Dabral… - Computer Graphics …, 2023 - Wiley Online Library
3D reconstruction of deformable (or non-rigid) scenes from a set of monocular 2D
image observations is a long‐standing and actively researched area of computer vision and …

SUDS: Scalable urban dynamic scenes

H Turki, JY Zhang, F Ferroni… - Proceedings of the …, 2023 - openaccess.thecvf.com
We extend neural radiance fields (NeRFs) to dynamic large-scale urban scenes. Prior work
tends to reconstruct single video clips of short durations (up to 10 seconds). Two reasons …

DynIBaR: Neural dynamic image-based rendering

Z Li, Q Wang, F Cole, R Tucker… - Proceedings of the …, 2023 - openaccess.thecvf.com
We address the problem of synthesizing novel views from a monocular video depicting a
complex dynamic scene. State-of-the-art methods based on temporally varying Neural …

NeuS2: Fast learning of neural implicit surfaces for multi-view reconstruction

Y Wang, Q Han, M Habermann… - Proceedings of the …, 2023 - openaccess.thecvf.com
Recent methods for neural surface representation and rendering, for example NeuS, have
demonstrated the remarkably high-quality reconstruction of static scenes. However, the …

HumanNeRF: Free-viewpoint rendering of moving people from monocular video

CY Weng, B Curless, PP Srinivasan… - Proceedings of the …, 2022 - openaccess.thecvf.com
We introduce HumanNeRF, a free-viewpoint rendering method that works on a given
monocular video of a human performing complex body motions, e.g., a video from YouTube …

Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction

C Sun, M Sun, HT Chen - … of the IEEE/CVF conference on …, 2022 - openaccess.thecvf.com
We present a super-fast convergence approach to reconstructing the per-scene radiance
field from a set of images that capture the scene with known poses. This task, which is often …