2D Gaussian splatting for geometrically accurate radiance fields

B Huang, Z Yu, A Chen, A Geiger, S Gao - ACM SIGGRAPH 2024 …, 2024 - dl.acm.org
3D Gaussian Splatting (3DGS) has recently revolutionized radiance field reconstruction,
achieving high-quality novel view synthesis and fast rendering speed. However, 3DGS fails …

Neuralangelo: High-fidelity neural surface reconstruction

Z Li, T Müller, A Evans, RH Taylor… - Proceedings of the …, 2023 - openaccess.thecvf.com
Neural surface reconstruction has been shown to be powerful for recovering dense 3D
surfaces via image-based neural rendering. However, current methods struggle to recover …

SV3D: Novel multi-view synthesis and 3D generation from a single image using latent video diffusion

V Voleti, CH Yao, M Boss, A Letts, D Pankratz… - … on Computer Vision, 2024 - Springer
We present Stable Video 3D (SV3D), a latent video diffusion model for high-resolution,
image-to-multi-view generation of orbital videos around a 3D object. Recent …

Mip-Splatting: Alias-free 3D Gaussian splatting

Z Yu, A Chen, B Huang, T Sattler… - Proceedings of the …, 2024 - openaccess.thecvf.com
Recently, 3D Gaussian Splatting has demonstrated impressive novel view synthesis
results, reaching high fidelity and efficiency. However, strong artifacts can be observed when …

UniSim: A neural closed-loop sensor simulator

Z Yang, Y Chen, J Wang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Rigorously testing autonomy systems is essential for making safe self-driving vehicles (SDV)
a reality. It requires one to generate safety critical scenarios beyond what can be collected …

MVSplat: Efficient 3D Gaussian splatting from sparse multi-view images

Y Chen, H Xu, C Zheng, B Zhuang, M Pollefeys… - … on Computer Vision, 2024 - Springer
We introduce MVSplat, an efficient model that, given sparse multi-view images as input,
predicts clean feed-forward 3D Gaussians. To accurately localize the Gaussian centers, we …

Magic123: One image to high-quality 3D object generation using both 2D and 3D diffusion priors

G Qian, J Mai, A Hamdi, J Ren, A Siarohin, B Li… - arXiv preprint arXiv …, 2023 - arxiv.org
We present Magic123, a two-stage coarse-to-fine approach for high-quality, textured 3D
mesh generation from a single unposed image in the wild, using both 2D and 3D priors. In …

OmniObject3D: Large-vocabulary 3D object dataset for realistic perception, reconstruction and generation

T Wu, J Zhang, X Fu, Y Wang, J Ren… - Proceedings of the …, 2023 - openaccess.thecvf.com
Recent advances in modeling 3D objects mostly rely on synthetic datasets due to the lack of
large-scale real-scanned 3D databases. To facilitate the development of 3D perception …

SparseNeRF: Distilling depth ranking for few-shot novel view synthesis

G Wang, Z Chen, CC Loy, Z Liu - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Neural Radiance Field (NeRF) rendering significantly degrades when only a limited number
of views are available. To complement the lack of 3D information, depth-based models, such …

Relightable 3D Gaussians: Realistic point cloud relighting with BRDF decomposition and ray tracing

J Gao, C Gu, Y Lin, Z Li, H Zhu, X Cao, L Zhang… - … on Computer Vision, 2024 - Springer
In this paper, we present a novel differentiable point-based rendering framework to achieve
photo-realistic relighting. To make the reconstructed scene relightable, we enhance vanilla …