GET3D: A generative model of high quality 3D textured shapes learned from images
As several industries are moving towards modeling massive 3D virtual worlds, the need for
content creation tools that can scale in terms of the quantity, quality, and diversity of 3D …
LION: Latent point diffusion models for 3D shape generation
Denoising diffusion models (DDMs) have shown promising results in 3D point cloud
synthesis. To advance 3D DDMs and make them useful for digital artists, we require (i) high …
Text2Mesh: Text-driven neural stylization for meshes
In this work, we develop intuitive controls for editing the style of 3D objects. Our framework,
Text2Mesh, stylizes a 3D mesh by predicting color and local geometric details which …
ARF: Artistic radiance fields
We present a method for transferring the artistic features of an arbitrary style image to a 3D
scene. Previous methods that perform 3D stylization on point clouds or meshes are sensitive …
TexFusion: Synthesizing 3D textures with text-guided image diffusion models
We present TexFusion (Texture Diffusion), a new method to synthesize textures for
given 3D geometries, using only large-scale text-guided image diffusion models. In contrast …
NeRF-Art: Text-driven neural radiance fields stylization
As a powerful representation of 3D scenes, the neural radiance field (NeRF) enables high-
quality novel view synthesis from multi-view images. Stylizing NeRF, however, remains …
TANGO: Text-driven photorealistic and robust 3D stylization via lighting decomposition
Creation of 3D content by stylization is a promising yet challenging problem in computer
vision and graphics research. In this work, we focus on stylizing photorealistic appearance …
CCPL: Contrastive coherence preserving loss for versatile style transfer
In this paper, we aim to devise a universally versatile style transfer method capable of
performing artistic, photo-realistic, and video style transfer jointly, without seeing videos …
Texture generation on 3D meshes with Point-UV Diffusion
In this work, we focus on synthesizing high-quality textures on 3D meshes. We present Point-
UV diffusion, a coarse-to-fine pipeline that marries the denoising diffusion model with UV …
TextDeformer: Geometry manipulation using text guidance
We present a technique for automatically producing a deformation of an input triangle mesh,
guided solely by a text prompt. Our framework is capable of deformations that produce both …