Efficient diffusion models: A comprehensive survey from principles to practices
Z Ma, Y Zhang, G Jia, L Zhao, Y Ma, M Ma… - arxiv preprint arxiv …, 2024 - arxiv.org
As one of the most popular and sought-after generative models in recent years, diffusion
models have sparked the interest of many researchers and steadily shown excellent …
ZeST: Zero-Shot Material Transfer from a Single Image
We propose ZeST, a method for zero-shot material transfer to an object in the input image
given a material exemplar image. ZeST leverages existing diffusion adapters to extract …
Garment3DGen: 3D garment stylization and texture generation
We introduce Garment3DGen, a new method to synthesize 3D garment assets from a base
mesh given a single input image as guidance. Our proposed approach allows users to …
A survey on personalized content synthesis with diffusion models
Recent advancements in generative models have significantly impacted content creation,
leading to the emergence of Personalized Content Synthesis (PCS). With a small set of user …
MaPa: Text-driven photorealistic material painting for 3D shapes
This paper aims to generate materials for 3D meshes from text descriptions. Unlike existing
methods that synthesize texture maps, we propose to generate segment-wise procedural …
RoomTex: Texturing compositional indoor scenes via iterative inpainting
The advancement of diffusion models has pushed the boundary of text-to-3D object
generation. While it is straightforward to composite objects into a scene with reasonable …
ColorPeel: Color prompt learning with diffusion models via color and shape disentanglement
Text-to-Image (T2I) generation has made significant advancements with the advent
of diffusion models. These models exhibit remarkable abilities to produce images based on …
Phidias: A generative model for creating 3D content from text, image, and 3D conditions with reference-augmented diffusion
In 3D modeling, designers often use an existing 3D model as a reference to create new
ones. This practice has inspired the development of Phidias, a novel generative model that …
MatAtlas: Text-driven Consistent Geometry Texturing and Material Assignment
We present MatAtlas, a method for consistent text-guided 3D model texturing. Following
recent progress, we leverage a large-scale text-to-image generation model (e.g., Stable …
StyleTex: Style Image-Guided Texture Generation for 3D Models
Style-guided texture generation aims to generate a texture that is harmonious with both the
style of the reference image and the geometry of the input mesh, given a reference style …