One-2-3-45: Any single image to 3d mesh in 45 seconds without per-shape optimization

M Liu, C Xu, H Jin, L Chen… - Advances in Neural …, 2024 - proceedings.neurips.cc
Single image 3D reconstruction is an important but challenging task that requires extensive
knowledge of our natural world. Many existing methods solve this problem by optimizing a …

One-2-3-45++: Fast single image to 3d objects with consistent multi-view generation and 3d diffusion

M Liu, R Shi, L Chen, Z Zhang, C Xu… - Proceedings of the …, 2024 - openaccess.thecvf.com
Recent advancements in open-world 3D object generation have been remarkable, with
image-to-3D methods offering superior fine-grained control over their text-to-3D …

Dreamgaussian: Generative gaussian splatting for efficient 3d content creation

J Tang, J Ren, H Zhou, Z Liu, G Zeng - arXiv preprint arXiv:2309.16653, 2023 - arxiv.org
Recent advances in 3D content creation mostly leverage optimization-based 3D generation
via score distillation sampling (SDS). Though promising results have been exhibited, these …

Text2room: Extracting textured 3d meshes from 2d text-to-image models

L Höllein, A Cao, A Owens… - Proceedings of the …, 2023 - openaccess.thecvf.com
Abstract We present Text2Room, a method for generating room-scale textured 3D meshes
from a given text prompt as input. To this end, we leverage pre-trained 2D text-to-image …

Magic123: One image to high-quality 3d object generation using both 2d and 3d diffusion priors

G Qian, J Mai, A Hamdi, J Ren, A Siarohin, B Li… - arXiv preprint arXiv …, 2023 - arxiv.org
We present Magic123, a two-stage coarse-to-fine approach for high-quality, textured 3D
mesh generation from a single unposed image in the wild using both 2D and 3D priors. In …

ECON: Explicit clothed humans optimized via normal integration

Y Xiu, J Yang, X Cao, D Tzionas… - Proceedings of the …, 2023 - openaccess.thecvf.com
The combination of deep learning, artist-curated scans, and Implicit Functions (IF), is
enabling the creation of detailed, clothed, 3D humans from images. However, existing …

Dreamavatar: Text-and-shape guided 3d human avatar generation via diffusion models

Y Cao, YP Cao, K Han, Y Shan… - Proceedings of the …, 2024 - openaccess.thecvf.com
We present DreamAvatar, a text-and-shape guided framework for generating high-quality 3D
human avatars with controllable poses. While encouraging results have been reported by …

Humangaussian: Text-driven 3d human generation with gaussian splatting

X Liu, X Zhan, J Tang, Y Shan, G Zeng… - Proceedings of the …, 2024 - openaccess.thecvf.com
Realistic 3D human generation from text prompts is a desirable yet challenging task.
Existing methods optimize 3D representations like mesh or neural fields via score distillation …

CLAY: A Controllable Large-scale Generative Model for Creating High-quality 3D Assets

L Zhang, Z Wang, Q Zhang, Q Qiu, A Pang… - ACM Transactions on …, 2024 - dl.acm.org
In the realm of digital creativity, our potential to craft intricate 3D worlds from imagination is
often hampered by the limitations of existing digital tools, which demand extensive expertise …

Texfusion: Synthesizing 3d textures with text-guided image diffusion models

T Cao, K Kreis, S Fidler, N Sharp… - Proceedings of the …, 2023 - openaccess.thecvf.com
Abstract We present TexFusion (Texture Diffusion), a new method to synthesize textures for
given 3D geometries, using only large-scale text-guided image diffusion models. In contrast …