Zero-1-to-3: Zero-shot one image to 3d object
We introduce Zero-1-to-3, a framework for changing the camera viewpoint of an
object given just a single RGB image. To perform novel view synthesis in this …
One-2-3-45: Any single image to 3d mesh in 45 seconds without per-shape optimization
Single image 3D reconstruction is an important but challenging task that requires extensive
knowledge of our natural world. Many existing methods solve this problem by optimizing a …
Wonder3d: Single image to 3d using cross-domain diffusion
In this work, we introduce Wonder3D, a novel method for generating high-fidelity textured
meshes from single-view images with remarkable efficiency. Recent methods based on the …
Instruct-nerf2nerf: Editing 3d scenes with instructions
We propose a method for editing NeRF scenes with text-instructions. Given a NeRF of a
scene and the collection of images used to reconstruct it, our method uses an image …
Dreambooth3d: Subject-driven text-to-3d generation
We present DreamBooth3D, an approach to personalize text-to-3D generative models from
as few as 3-6 casually captured images of a subject. Our approach combines recent …
One-2-3-45++: Fast single image to 3d objects with consistent multi-view generation and 3d diffusion
Recent advancements in open-world 3D object generation have been remarkable with
image-to-3D methods offering superior fine-grained control over their text-to-3D …
Syncdreamer: Generating multiview-consistent images from a single-view image
In this paper, we present a novel diffusion model called SyncDreamer that generates multiview-consistent
images from a single-view image. Using pretrained large-scale 2D diffusion models, recent …
Text2room: Extracting textured 3d meshes from 2d text-to-image models
We present Text2Room, a method for generating room-scale textured 3D meshes
from a given text prompt as input. To this end, we leverage pre-trained 2D text-to-image …
Magic123: One image to high-quality 3d object generation using both 2d and 3d diffusion priors
We present Magic123, a two-stage coarse-to-fine approach for high-quality, textured 3D
mesh generation from a single unposed image in the wild using both 2D and 3D priors. In …
Triplane meets gaussian splatting: Fast and generalizable single-view 3d reconstruction with transformers
Recent advancements in 3D reconstruction from single images have been driven by the
evolution of generative models. Prominent among these are methods based on Score …