Zero-1-to-3: Zero-shot one image to 3d object

R Liu, R Wu, B Van Hoorick… - Proceedings of the …, 2023 - openaccess.thecvf.com
We introduce Zero-1-to-3, a framework for changing the camera viewpoint of an
object given just a single RGB image. To perform novel view synthesis in this …

One-2-3-45: Any single image to 3d mesh in 45 seconds without per-shape optimization

M Liu, C Xu, H Jin, L Chen… - Advances in Neural …, 2024 - proceedings.neurips.cc
Single image 3D reconstruction is an important but challenging task that requires extensive
knowledge of our natural world. Many existing methods solve this problem by optimizing a …

Wonder3d: Single image to 3d using cross-domain diffusion

X Long, YC Guo, C Lin, Y Liu, Z Dou… - Proceedings of the …, 2024 - openaccess.thecvf.com
In this work, we introduce Wonder3D, a novel method for generating high-fidelity textured
meshes from single-view images with remarkable efficiency. Recent methods based on the …

Instruct-nerf2nerf: Editing 3d scenes with instructions

A Haque, M Tancik, AA Efros… - Proceedings of the …, 2023 - openaccess.thecvf.com
We propose a method for editing NeRF scenes with text-instructions. Given a NeRF of a
scene and the collection of images used to reconstruct it, our method uses an image …

Dreambooth3d: Subject-driven text-to-3d generation

A Raj, S Kaza, B Poole, M Niemeyer… - Proceedings of the …, 2023 - openaccess.thecvf.com
We present DreamBooth3D, an approach to personalize text-to-3D generative models from
as few as 3-6 casually captured images of a subject. Our approach combines recent …

One-2-3-45++: Fast single image to 3d objects with consistent multi-view generation and 3d diffusion

M Liu, R Shi, L Chen, Z Zhang, C Xu… - Proceedings of the …, 2024 - openaccess.thecvf.com
Recent advancements in open-world 3D object generation have been remarkable with
image-to-3D methods offering superior fine-grained control over their text-to-3D …

Syncdreamer: Generating multiview-consistent images from a single-view image

Y Liu, C Lin, Z Zeng, X Long, L Liu, T Komura… - arXiv preprint arXiv …, 2023 - arxiv.org
In this paper, we present a novel diffusion model called SyncDreamer that generates multiview-consistent
images from a single-view image. Using pretrained large-scale 2D diffusion models, recent …

Text2room: Extracting textured 3d meshes from 2d text-to-image models

L Höllein, A Cao, A Owens… - Proceedings of the …, 2023 - openaccess.thecvf.com
We present Text2Room, a method for generating room-scale textured 3D meshes
from a given text prompt as input. To this end, we leverage pre-trained 2D text-to-image …

Magic123: One image to high-quality 3d object generation using both 2d and 3d diffusion priors

G Qian, J Mai, A Hamdi, J Ren, A Siarohin, B Li… - arXiv preprint arXiv …, 2023 - arxiv.org
We present Magic123, a two-stage coarse-to-fine approach for high-quality, textured 3D
mesh generation from a single unposed image in the wild using both 2D and 3D priors. In …

Triplane meets gaussian splatting: Fast and generalizable single-view 3d reconstruction with transformers

ZX Zou, Z Yu, YC Guo, Y Li, D Liang… - Proceedings of the …, 2024 - openaccess.thecvf.com
Recent advancements in 3D reconstruction from single images have been driven by the
evolution of generative models. Prominent among these are methods based on Score …