ProlificDreamer: High-fidelity and diverse text-to-3D generation with variational score distillation
Score distillation sampling (SDS) has shown great promise in text-to-3D generation by
distilling pretrained large-scale text-to-image diffusion models, but suffers from over …
Wonder3D: Single image to 3D using cross-domain diffusion
In this work, we introduce Wonder3D, a novel method for generating high-fidelity textured
meshes from single-view images with remarkable efficiency. Recent methods based on the …
Text-to-3D using Gaussian splatting
Automatic text-to-3D generation that combines Score Distillation Sampling (SDS) with the
optimization of volume rendering has achieved remarkable progress in synthesizing realistic …
One-2-3-45++: Fast single image to 3D objects with consistent multi-view generation and 3D diffusion
Recent advancements in open-world 3D object generation have been remarkable, with
image-to-3D methods offering superior fine-grained control over their text-to-3D …
SyncDreamer: Generating multiview-consistent images from a single-view image
In this paper, we present a novel diffusion model called SyncDreamer that generates multiview-consistent
images from a single-view image. Using pretrained large-scale 2D diffusion models, recent …
DreamGaussian: Generative Gaussian splatting for efficient 3D content creation
Recent advances in 3D content creation mostly leverage optimization-based 3D generation
via score distillation sampling (SDS). Though promising results have been exhibited, these …
Text2Room: Extracting textured 3D meshes from 2D text-to-image models
We present Text2Room, a method for generating room-scale textured 3D meshes
from a given text prompt as input. To this end, we leverage pre-trained 2D text-to-image …
GaussianDreamer: Fast generation from text to 3D Gaussians by bridging 2D and 3D diffusion models
In recent times, the generation of 3D assets from text prompts has shown impressive results.
Both 2D and 3D diffusion models can help generate decent 3D objects based on prompts …
GaussianEditor: Swift and controllable 3D editing with Gaussian splatting
3D editing plays a crucial role in many areas such as gaming and virtual reality.
Traditional 3D editing methods, which rely on representations like meshes and point clouds …
DreamAvatar: Text-and-shape guided 3D human avatar generation via diffusion models
We present DreamAvatar, a text-and-shape guided framework for generating high-quality 3D
human avatars with controllable poses. While encouraging results have been reported by …