Text-guided texturing by synchronized multi-view diffusion
This paper introduces a novel approach to synthesize texture to dress up a 3D object, given
a text prompt. Based on the pre-trained text-to-image (T2I) diffusion model, existing methods …
DreamMat: High-quality PBR material generation with geometry- and light-aware diffusion models
Recent advancements in 2D diffusion models allow appearance generation on untextured
raw meshes. These methods create RGB textures by distilling a 2D diffusion model, which …
BlenderAlchemy: Editing 3D graphics with vision-language models
Graphic design is important for various applications, including movie production and game
design. To create a high-quality scene, designers usually need to spend hours in software …
ShapeGPT: 3D shape generation with a unified multi-modal language model
The advent of large language models, which enable flexibility through instruction-driven
approaches, has revolutionized many traditional generative tasks, but large models for 3D …
DiLightNet: Fine-grained lighting control for diffusion-based image generation
This paper presents a novel method for exerting fine-grained lighting control during text-
driven diffusion-based image generation. While existing diffusion models already have the …
Meta 3D TextureGen: Fast and consistent texture generation for 3D objects
The recent availability and adaptability of text-to-image models have sparked a new era in
many related domains that benefit from the learned text priors as well as high-quality and …
Cascade-Zero123: One image to highly consistent 3D with self-prompted nearby views
Synthesizing multi-view 3D from a single image is a significant but challenging task. Zero-
1-to-3 methods have achieved great success by lifting a 2D latent diffusion model to the 3D …
MaPa: Text-driven photorealistic material painting for 3D shapes
This paper aims to generate materials for 3D meshes from text descriptions. Unlike existing
methods that synthesize texture maps, we propose to generate segment-wise procedural …
Meta 3D Gen
We introduce Meta 3D Gen (3DGen), a new state-of-the-art, fast pipeline for text-to-3D asset
generation. 3DGen offers 3D asset creation with high prompt fidelity and high-quality 3D …
RoomTex: Texturing compositional indoor scenes via iterative inpainting
The advancement of diffusion models has pushed the boundary of text-to-3D object
generation. While it is straightforward to composite objects into a scene with reasonable …