Robust dynamic radiance fields
Dynamic radiance field reconstruction methods aim to model the time-varying structure and
appearance of a dynamic scene. Existing methods, however, assume that accurate camera …
Consistent view synthesis with pose-guided diffusion models
Novel view synthesis from a single image has been a cornerstone problem for many virtual
reality applications that provide immersive experiences. However, most existing techniques …
The augmented designer: a research agenda for generative AI-enabled design
Generative AI algorithms that are able to generate creative output are progressing at
tremendous speed. This paper presents a research agenda for Generative AI-based support …
Dynamic view synthesis from dynamic monocular video
We present an algorithm for generating novel views at arbitrary viewpoints and any input
time step given a monocular video of a dynamic scene. Our work builds upon recent …
Space-time neural irradiance fields for free-viewpoint video
We present a method that learns a spatiotemporal neural irradiance field for dynamic scenes
from a single video. Our learned representation enables free-viewpoint rendering of the …
Learning neural light fields with ray-space embedding
Neural radiance fields (NeRFs) produce state-of-the-art view synthesis results, but are slow
to render, requiring hundreds of network evaluations per pixel to approximate a volume …
InfiniteNature-Zero: Learning perpetual view generation of natural scenes from single images
We present a method for learning to generate unbounded flythrough videos of natural
scenes starting from a single view. This capability is learned from a collection of single …
Consistent-1-to-3: Consistent image to 3D view synthesis via geometry-aware diffusion models
Zero-shot novel view synthesis (NVS) from a single image is an essential problem in 3D
object understanding. While recent approaches that leverage pre-trained generative models …
360MonoDepth: High-resolution 360° monocular depth estimation
360° cameras can capture complete environments in a single shot, which makes
360° imagery alluring in many computer vision tasks. However, monocular depth …
Geometry-free view synthesis: Transformers and no 3D priors
Is a geometric model required to synthesize novel views from a single image? Being bound
to local convolutions, CNNs need explicit 3D biases to model geometric transformations. In …