Triplane Meets Gaussian Splatting: Fast and Generalizable Single-View 3D Reconstruction with Transformers

ZX Zou, Z Yu, YC Guo, Y Li, D Liang… - Proceedings of the …, 2024 - openaccess.thecvf.com
Recent advancements in 3D reconstruction from single images have been driven by the
evolution of generative models. Prominent among these are methods based on Score …

GVGEN: Text-to-3D Generation with Volumetric Representation

X He, J Chen, S Peng, D Huang, Y Li, X Huang… - … on Computer Vision, 2024 - Springer
In recent years, 3D Gaussian splatting has emerged as a powerful technique for 3D
reconstruction and generation, known for its fast and high-quality rendering capabilities …

MVD-Fusion: Single-view 3D via Depth-consistent Multi-view Generation

H Hu, Z Zhou, V Jampani… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
We present MVD-Fusion: a method for single-view 3D inference via generative modeling of
multi-view-consistent RGB-D images. While recent methods pursuing 3D inference advocate …

Hi3D: Pursuing High-Resolution Image-to-3D Generation with Video Diffusion Models

H Yang, Y Chen, Y Pan, T Yao, Z Chen… - Proceedings of the …, 2024 - dl.acm.org
Despite tremendous progress in image-to-3D generation, existing methods still
struggle to produce multi-view-consistent images with high-resolution textures in detail …

DiffPano: Scalable and Consistent Text to Panorama Generation with Spherical Epipolar-Aware Diffusion

W Ye, C Ji, Z Chen, J Gao, X Huang, SH Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
Diffusion-based methods have achieved remarkable results in 2D image or 3D
object generation; however, the generation of 3D scenes and even 360° images …

MV-Adapter: Multi-view Consistent Image Generation Made Easy

Z Huang, YC Guo, H Wang, R Yi, L Ma, YP Cao… - arXiv preprint arXiv …, 2024 - arxiv.org
Existing multi-view image generation methods often make invasive modifications to pre-
trained text-to-image (T2I) models and require full fine-tuning, leading to (1) high …

3DEnhancer: Consistent Multi-View Diffusion for 3D Enhancement

Y Luo, S Zhou, Y Lan, X Pan, CC Loy - arXiv preprint arXiv:2412.18565, 2024 - arxiv.org
Despite advances in neural rendering, due to the scarcity of high-quality 3D datasets and
the inherent limitations of multi-view diffusion models, view synthesis and 3D model …

Ouroboros3D: Image-to-3D Generation via 3D-aware Recursive Diffusion

H Wen, Z Huang, Y Wang, X Chen, Y Qiao… - arXiv preprint arXiv …, 2024 - arxiv.org
Existing single image-to-3D creation methods typically involve a two-stage process, first
generating multi-view images, and then using these images for 3D reconstruction. However …

From Parts to Whole: A Unified Reference Framework for Controllable Human Image Generation

Z Huang, H Fan, L Wang, L Sheng - arXiv preprint arXiv:2404.15267, 2024 - arxiv.org
Recent advancements in controllable human image generation have led to zero-shot
generation using structural signals (e.g., pose, depth) or facial appearance. Yet, generating …

MIDI: Multi-Instance Diffusion for Single Image to 3D Scene Generation

Z Huang, YC Guo, X An, Y Yang, Y Li, ZX Zou… - arXiv preprint arXiv …, 2024 - arxiv.org
This paper introduces MIDI, a novel paradigm for compositional 3D scene generation from a
single image. Unlike existing methods that rely on reconstruction or retrieval techniques or …