Neural spline fields for burst image fusion and layer separation

I Chugunov, D Shustin, R Yan… - Proceedings of the …, 2024 - openaccess.thecvf.com
Each photo in an image burst can be considered a sample of a complex 3D scene: the
product of parallax, diffuse and specular materials, scene motion, and illuminant variation …

Transient neural radiance fields for lidar view synthesis and 3d reconstruction

A Malik, P Mirdehghan, S Nousias… - Advances in …, 2024 - proceedings.neurips.cc
Neural radiance fields (NeRFs) have become a ubiquitous tool for modeling scene
appearance and geometry from multiview imagery. Recent work has also begun to explore …

GenSDF: Two-stage learning of generalizable signed distance functions

G Chou, I Chugunov, F Heide - Advances in Neural …, 2022 - proceedings.neurips.cc
We investigate the generalization capabilities of neural signed distance functions (SDFs) for
learning 3D object representations for unseen and unlabeled point clouds. Existing methods …

AONeuS: A neural rendering framework for acoustic-optical sensor fusion

M Qadri, K Zhang, A Hinduja, M Kaess… - ACM SIGGRAPH 2024 …, 2024 - dl.acm.org
Underwater perception and 3D surface reconstruction are challenging problems with broad
applications in construction, security, marine archaeology, and environmental monitoring …

Rapid network adaptation: Learning to adapt neural networks using test-time feedback

T Yeo, OF Kar, Z Sodagar… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
We propose a method for adapting neural networks to distribution shifts at test-time. In
contrast to training-time robustness mechanisms that attempt to anticipate the shift, we …

Neural Fields for Structured Lighting

A Shandilya, B Attal, C Richardt… - Proceedings of the …, 2023 - openaccess.thecvf.com
We present an image formation model and optimization procedure that combines the
advantages of neural radiance fields and structured light imaging. Existing depth-supervised …

TurboSL: Dense, Accurate, and Fast 3D by Neural Inverse Structured Light

P Mirdehghan, M Wu, W Chen… - Proceedings of the …, 2024 - openaccess.thecvf.com
We show how to turn a noisy and fragile active triangulation technique--three-pattern
structured light with a grayscale camera--into a fast and powerful tool for 3D capture: able to …

Shakes on a plane: Unsupervised depth estimation from unstabilized photography

I Chugunov, Y Zhang, F Heide - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Modern mobile burst photography pipelines capture and merge a short sequence of frames
to recover an enhanced image, but often disregard the 3D nature of the scene they capture …

Instantaneous Perception of Moving Objects in 3D

D Liu, B Zhuang, DN Metaxas… - Proceedings of the …, 2024 - openaccess.thecvf.com
The perception of 3D motion of surrounding traffic participants is crucial for driving safety.
While existing works primarily focus on general large motions, we contend that the …

Consistent direct time-of-flight video depth super-resolution

Z Sun, W Ye, J Xiong, G Choe… - Proceedings of the …, 2023 - openaccess.thecvf.com
Direct time-of-flight (dToF) sensors are promising for next-generation on-device 3D sensing.
However, limited by manufacturing capabilities in a compact module, the dToF data has low …