Robust dynamic radiance fields

YL Liu, C Gao, A Meuleman… - Proceedings of the …, 2023 - openaccess.thecvf.com
Dynamic radiance field reconstruction methods aim to model the time-varying structure and
appearance of a dynamic scene. Existing methods, however, assume that accurate camera …

Consistent view synthesis with pose-guided diffusion models

HY Tseng, Q Li, C Kim, S Alsisan… - Proceedings of the …, 2023 - openaccess.thecvf.com
Novel view synthesis from a single image has been a cornerstone problem for many Virtual
Reality applications that provide immersive experiences. However, most existing techniques …

The augmented designer: a research agenda for generative AI-enabled design

K Thoring, S Huettemann, RM Mueller - Proceedings of the Design …, 2023 - cambridge.org
Generative AI algorithms that are able to generate creative output are progressing at
tremendous speed. This paper presents a research agenda for Generative AI-based support …

Dynamic view synthesis from dynamic monocular video

C Gao, A Saraf, J Kopf… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
We present an algorithm for generating novel views at arbitrary viewpoints and any input
time step given a monocular video of a dynamic scene. Our work builds upon recent …

Space-time neural irradiance fields for free-viewpoint video

W Xian, JB Huang, J Kopf… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
We present a method that learns a spatiotemporal neural irradiance field for dynamic scenes
from a single video. Our learned representation enables free-viewpoint rendering of the …

Learning neural light fields with ray-space embedding

B Attal, JB Huang, M Zollhöfer… - Proceedings of the …, 2022 - openaccess.thecvf.com
Neural radiance fields (NeRFs) produce state-of-the-art view synthesis results, but are slow
to render, requiring hundreds of network evaluations per pixel to approximate a volume …
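The rendering cost the snippet refers to comes from NeRF-style volume rendering: each pixel's color is a quadrature over many samples along a ray, each requiring a network evaluation. A minimal sketch of the compositing step alone (assuming per-sample densities `sigmas`, colors `colors`, and segment lengths `deltas` have already been produced by the network; function name is illustrative, not from the paper):

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Alpha-composite one ray's samples into a single RGB color.

    sigmas: (N,) volume densities, colors: (N, 3) RGB, deltas: (N,) segment lengths.
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance T_i = prod_{j < i} (1 - alpha_j): light surviving to sample i
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Per-sample contribution weights, then weighted sum of colors
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)
```

With hundreds of such samples per pixel (each an MLP call), per-frame rendering is expensive, which is the bottleneck that light-field formulations like the one above aim to avoid by predicting each ray's color directly.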

Infinitenature-zero: Learning perpetual view generation of natural scenes from single images

Z Li, Q Wang, N Snavely, A Kanazawa - European Conference on …, 2022 - Springer
We present a method for learning to generate unbounded flythrough videos of natural
scenes starting from a single view. This capability is learned from a collection of single …

Consistent-1-to-3: Consistent image to 3d view synthesis via geometry-aware diffusion models

J Ye, P Wang, K Li, Y Shi… - … Conference on 3D Vision …, 2024 - ieeexplore.ieee.org
Zero-shot novel view synthesis (NVS) from a single image is an essential problem in 3D
object understanding. While recent approaches that leverage pre-trained generative models …

360monodepth: High-resolution 360° monocular depth estimation

M Rey-Area, M Yuan… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
360° cameras can capture complete environments in a single shot, which makes
360° imagery alluring in many computer vision tasks. However, monocular depth …

Geometry-free view synthesis: Transformers and no 3d priors

R Rombach, P Esser, B Ommer - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Is a geometric model required to synthesize novel views from a single image? Being bound
to local convolutions, CNNs need explicit 3D biases to model geometric transformations. In …