NeRF: Neural radiance field in 3D vision, a comprehensive review

K Gao, Y Gao, H He, D Lu, L Xu, J Li - arXiv preprint arXiv:2210.00379, 2022 - arxiv.org
Neural Radiance Field (NeRF) has recently become a significant development in the field of
Computer Vision, allowing for implicit, neural network-based scene representation and …

Incorporating physics into data-driven computer vision

A Kadambi, C de Melo, CJ Hsieh… - Nature Machine …, 2023 - nature.com
Many computer vision techniques infer properties of our physical world from images.
Although images are formed through the physics of light and mechanics, computer vision …

3D Gaussian splatting for real-time radiance field rendering

B Kerbl, G Kopanas, T Leimkühler, G Drettakis - ACM Trans. Graph., 2023 - sgvr.kaist.ac.kr

Instruct-NeRF2NeRF: Editing 3D scenes with instructions

A Haque, M Tancik, AA Efros… - Proceedings of the …, 2023 - openaccess.thecvf.com
We propose a method for editing NeRF scenes with text-instructions. Given a NeRF of a
scene and the collection of images used to reconstruct it, our method uses an image …

pixelSplat: 3D Gaussian splats from image pairs for scalable generalizable 3D reconstruction

D Charatan, SL Li, A Tagliasacchi… - Proceedings of the …, 2024 - openaccess.thecvf.com
We introduce pixelSplat, a feed-forward model that learns to reconstruct 3D radiance fields
parameterized by 3D Gaussian primitives from pairs of images. Our model features real-time …

Generative novel view synthesis with 3D-aware diffusion models

ER Chan, K Nagano, MA Chan… - Proceedings of the …, 2023 - openaccess.thecvf.com
We present a diffusion-based model for 3D-aware generative novel view synthesis from as
few as a single input image. Our model samples from the distribution of possible renderings …

GRM: Large Gaussian reconstruction model for efficient 3D reconstruction and generation

Y Xu, Z Shi, W Yifan, H Chen, C Yang, S Peng… - … on Computer Vision, 2024 - Springer
We introduce GRM, a large-scale reconstructor capable of recovering a 3D asset from
sparse-view images in around 0.1 s. GRM is a feed-forward transformer-based model that …

MERF: Memory-efficient radiance fields for real-time view synthesis in unbounded scenes

C Reiser, R Szeliski, D Verbin, P Srinivasan… - ACM Transactions on …, 2023 - dl.acm.org
Neural radiance fields enable state-of-the-art photorealistic view synthesis. However,
existing radiance field representations are either too compute-intensive for real-time …

A survey on 3D Gaussian splatting

G Chen, W Wang - arXiv preprint arXiv:2401.03890, 2024 - arxiv.org
3D Gaussian splatting (GS) has recently emerged as a transformative technique in the realm
of explicit radiance fields and computer graphics. This innovative approach, characterized by …

LION: Latent point diffusion models for 3D shape generation

A Vahdat, F Williams, Z Gojcic… - Advances in …, 2022 - proceedings.neurips.cc
Denoising diffusion models (DDMs) have shown promising results in 3D point cloud
synthesis. To advance 3D DDMs and make them useful for digital artists, we require (i) high …