GET3D: A generative model of high quality 3D textured shapes learned from images

J Gao, T Shen, Z Wang, W Chen… - Advances in …, 2022 - proceedings.neurips.cc
As several industries are moving towards modeling massive 3D virtual worlds, the need for
content creation tools that can scale in terms of the quantity, quality, and diversity of 3D …

LION: Latent point diffusion models for 3D shape generation

A Vahdat, F Williams, Z Gojcic… - Advances in …, 2022 - proceedings.neurips.cc
Denoising diffusion models (DDMs) have shown promising results in 3D point cloud
synthesis. To advance 3D DDMs and make them useful for digital artists, we require (i) high …
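
The snippet is cut off, but the machinery behind 3D DDMs is the standard denoising-diffusion objective. Below is a minimal sketch of that objective applied directly to point-cloud coordinates, purely as an illustration: LION itself trains its diffusion models in a learned latent space rather than on raw points, and the tiny per-point denoiser here is a stand-in, not the paper's architecture.

```python
# Toy sketch of the generic epsilon-prediction diffusion loss on a point cloud.
# Not LION's actual model: LION diffuses in a learned latent space and uses a
# much richer denoiser; this only illustrates the noising/denoising objective.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)               # linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)  # \bar{alpha}_t

class TinyPointDenoiser(nn.Module):
    """Stand-in per-point denoiser: predicts the noise added to each xyz."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x_t, t):
        # x_t: (B, N, 3) noisy points; t: (B,) integer timesteps
        t_feat = (t.float() / T).view(-1, 1, 1).expand(-1, x_t.shape[1], 1)
        return self.net(torch.cat([x_t, t_feat], dim=-1))

def diffusion_loss(model, x0):
    """Noise clean points x0 to a random step t, then predict that noise."""
    B = x0.shape[0]
    t = torch.randint(0, T, (B,))
    a_bar = alphas_cumprod[t].view(B, 1, 1)
    eps = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps
    return ((model(x_t, t) - eps) ** 2).mean()

model = TinyPointDenoiser()
x0 = torch.randn(4, 2048, 3)        # placeholder batch of point clouds
print(diffusion_loss(model, x0))    # scalar training loss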

Text2Mesh: Text-driven neural stylization for meshes

O Michel, R Bar-On, R Liu, S Benaim… - Proceedings of the …, 2022 - openaccess.thecvf.com
In this work, we develop intuitive controls for editing the style of 3D objects. Our framework,
Text2Mesh, stylizes a 3D mesh by predicting color and local geometric details which …
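
To make that recipe concrete, here is a schematic sketch in the spirit of Text2Mesh: a small MLP ("neural style field") maps vertex positions to a per-vertex colour and a displacement along the vertex normal, and those predictions are optimized by rendering the stylized mesh and scoring the renders against the text prompt with CLIP. The differentiable renderer and CLIP encoders are omitted here; `text_image_score` is a placeholder, so treat this as an assumption-laden outline rather than the authors' implementation.

```python
# Schematic Text2Mesh-style neural style field: per-vertex colour plus a small
# displacement along the vertex normal, optimized against a (placeholder)
# text-image score. The real method scores rasterized views with CLIP.
import torch
import torch.nn as nn

class StyleField(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.color_head = nn.Sequential(nn.Linear(hidden, 3), nn.Sigmoid())
        self.disp_head = nn.Sequential(nn.Linear(hidden, 1), nn.Tanh())

    def forward(self, verts):
        h = self.backbone(verts)
        color = self.color_head(h)       # per-vertex RGB in [0, 1]
        disp = 0.05 * self.disp_head(h)  # small signed offset magnitude
        return color, disp

def text_image_score(rendered_views):
    # Placeholder for CLIP text-image similarity on rendered views.
    return rendered_views.mean()

verts = torch.rand(5000, 3)  # mesh vertices (placeholder)
normals = torch.nn.functional.normalize(torch.randn(5000, 3), dim=-1)

field = StyleField()
opt = torch.optim.Adam(field.parameters(), lr=5e-4)
for step in range(10):                # a few illustrative steps
    color, disp = field(verts)
    deformed = verts + disp * normals  # displace along vertex normals
    # A real loop would rasterize (deformed, color) from several viewpoints
    # and score the renders against the text prompt with CLIP.
    fake_render = torch.cat([deformed, color], dim=-1)
    loss = -text_image_score(fake_render)
    opt.zero_grad()
    loss.backward()
    opt.step()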

ARF: Artistic radiance fields

K Zhang, N Kolkin, S Bi, F Luan, Z Xu… - … on Computer Vision, 2022 - Springer
We present a method for transferring the artistic features of an arbitrary style image to a 3D
scene. Previous methods that perform 3D stylization on point clouds or meshes are sensitive …

TexFusion: Synthesizing 3D textures with text-guided image diffusion models

T Cao, K Kreis, S Fidler, N Sharp… - Proceedings of the …, 2023 - openaccess.thecvf.com
We present TexFusion (Texture Diffusion), a new method to synthesize textures for
given 3D geometries, using only large-scale text-guided image diffusion models. In contrast …

NeRF-Art: Text-driven neural radiance fields stylization

C Wang, R Jiang, M Chai, M He… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
As a powerful representation of 3D scenes, the neural radiance field (NeRF) enables high-
quality novel view synthesis from multi-view images. Stylizing NeRF, however, remains …

TANGO: Text-driven photorealistic and robust 3D stylization via lighting decomposition

Y Chen, R Chen, J Lei, Y Zhang… - Advances in Neural …, 2022 - proceedings.neurips.cc
Creation of 3D content by stylization is a promising yet challenging problem in computer
vision and graphics research. In this work, we focus on stylizing photorealistic appearance …

CCPL: Contrastive coherence preserving loss for versatile style transfer

Z Wu, Z Zhu, J Du, X Bai - European Conference on Computer Vision, 2022 - Springer
In this paper, we aim to devise a universally versatile style transfer method capable of
performing artistic, photo-realistic, and video style transfer jointly, without seeing videos …

Texture generation on 3D meshes with Point-UV diffusion

X Yu, P Dai, W Li, L Ma, Z Liu… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
In this work, we focus on synthesizing high-quality textures on 3D meshes. We present Point-
UV diffusion, a coarse-to-fine pipeline that marries the denoising diffusion model with UV …

TextDeformer: Geometry manipulation using text guidance

W Gao, N Aigerman, T Groueix, V Kim… - ACM SIGGRAPH 2023 …, 2023 - dl.acm.org
We present a technique for automatically producing a deformation of an input triangle mesh,
guided solely by a text prompt. Our framework is capable of deformations that produce both …
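
As a rough illustration of text-guided geometry manipulation, the sketch below optimizes deformation parameters so that a score of the deformed mesh improves. Note the assumptions: TextDeformer parameterizes deformations through per-face Jacobians rather than raw per-vertex offsets, and the real objective scores differentiable renders with CLIP; `clip_style_score` here is a stand-in placeholder.

```python
# Rough sketch of text-guided mesh deformation: optimize deformation
# parameters (plain per-vertex offsets here, unlike the paper's Jacobian-based
# parameterization) against a placeholder text-image score, plus a small
# regularizer that keeps the deformation gentle.
import torch

verts = torch.rand(5000, 3)  # input mesh vertices (placeholder)
offsets = torch.zeros_like(verts, requires_grad=True)
opt = torch.optim.Adam([offsets], lr=1e-3)

def clip_style_score(deformed_verts):
    # Stand-in for rendering the deformed mesh and computing CLIP similarity
    # between the renders and the text prompt.
    return -deformed_verts.pow(2).mean()

for step in range(10):
    deformed = verts + offsets
    score = clip_style_score(deformed)
    reg = offsets.pow(2).mean()       # discourage extreme displacements
    loss = -score + 0.1 * reg
    opt.zero_grad()
    loss.backward()
    opt.step()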