Efficient diffusion models: A comprehensive survey from principles to practices

Z Ma, Y Zhang, G Jia, L Zhao, Y Ma, M Ma… - arXiv preprint arXiv …, 2024 - arxiv.org
As one of the most popular and sought-after generative models of recent years, diffusion
models have sparked the interest of many researchers and steadily shown excellent …

ZeST: Zero-Shot Material Transfer from a Single Image

TY Cheng, P Sharma, A Markham, N Trigoni… - … on Computer Vision, 2024 - Springer
We propose ZeST, a method for zero-shot material transfer to an object in the input image
given a material exemplar image. ZeST leverages existing diffusion adapters to extract …

Garment3DGen: 3D garment stylization and texture generation

N Sarafianos, T Stuyck, X Xiang, Y Li, J Popovic… - arXiv preprint arXiv …, 2024 - arxiv.org
We introduce Garment3DGen, a new method to synthesize 3D garment assets from a base
mesh given a single input image as guidance. Our proposed approach allows users to …

A survey on personalized content synthesis with diffusion models

X Zhang, XY Wei, W Zhang, J Wu, Z Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
Recent advancements in generative models have significantly impacted content creation,
leading to the emergence of Personalized Content Synthesis (PCS). With a small set of user …

MaPa: Text-driven photorealistic material painting for 3D shapes

S Zhang, S Peng, T Xu, Y Yang, T Chen… - ACM SIGGRAPH 2024 …, 2024 - dl.acm.org
This paper aims to generate materials for 3D meshes from text descriptions. Unlike existing
methods that synthesize texture maps, we propose to generate segment-wise procedural …

RoomTex: Texturing compositional indoor scenes via iterative inpainting

Q Wang, R Lu, X Xu, J Wang, MY Wang, B Dai… - … on Computer Vision, 2024 - Springer
The advancement of diffusion models has pushed the boundary of text-to-3D object
generation. While it is straightforward to composite objects into a scene with reasonable …

ColorPeel: Color prompt learning with diffusion models via color and shape disentanglement

MA Butt, K Wang, J Vazquez-Corral… - European Conference on …, 2024 - Springer
Text-to-Image (T2I) generation has made significant advancements with the advent of
diffusion models. These models exhibit remarkable abilities to produce images based on …

Phidias: A generative model for creating 3D content from text, image, and 3D conditions with reference-augmented diffusion

Z Wang, T Wang, Z He, G Hancke, Z Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
In 3D modeling, designers often use an existing 3D model as a reference to create new
ones. This practice has inspired the development of Phidias, a novel generative model that …

MatAtlas: Text-driven Consistent Geometry Texturing and Material Assignment

D Ceylan, V Deschaintre, T Groueix, R Martin… - arXiv preprint arXiv …, 2024 - arxiv.org
We present MatAtlas, a method for consistent text-guided 3D model texturing. Following
recent progress, we leverage a large-scale text-to-image generation model (e.g., Stable …

StyleTex: Style Image-Guided Texture Generation for 3D Models

Z Xie, Y Zhang, X Tang, Y Wu, D Chen, G Li… - ACM Transactions on …, 2024 - dl.acm.org
Style-guided texture generation aims to generate a texture that is harmonious with both the
style of the reference image and the geometry of the input mesh, given a reference style …