Rb-modulation: Training-free personalization of diffusion models using stochastic optimal control
L Rout, Y Chen, N Ruiz, A Kumar, C Caramanis… - arXiv preprint arXiv …, 2024 - arxiv.org
We propose Reference-Based Modulation (RB-Modulation), a new plug-and-play solution
for training-free personalization of diffusion models. Existing training-free approaches exhibit …
Instantstyle-plus: Style transfer with content-preserving in text-to-image generation
Style transfer is an inventive process designed to create an image that maintains the
essence of the original while embracing the visual style of another. Although diffusion …
StyleTex: Style Image-Guided Texture Generation for 3D Models
Style-guided texture generation aims to generate a texture that is harmonious with both the
style of the reference image and the geometry of the input mesh, given a reference style …
From parts to whole: A unified reference framework for controllable human image generation
Recent advancements in controllable human image generation have led to zero-shot
generation using structural signals (e.g., pose, depth) or facial appearance. Yet, generating …
MagicTailor: Component-Controllable Personalization in Text-to-Image Diffusion Models
Recent advancements in text-to-image (T2I) diffusion models have enabled the creation of
high-quality images from text prompts, but they still struggle to generate images with precise …
Dream-in-Style: Text-to-3D Generation Using Stylized Score Distillation
H Kompanowski, BS Hua - arXiv preprint arXiv:2406.18581, 2024 - arxiv.org
We present a method to generate 3D objects in styles. Our method takes a text prompt and a
style reference image as input and reconstructs a neural radiance field to synthesize a 3D …
AttenCraft: Attention-guided Disentanglement of Multiple Concepts for Text-to-Image Customization
J Shentu, M Watson, NA Moubayed - arXiv preprint arXiv:2405.17965, 2024 - arxiv.org
With the unprecedented performance being achieved by text-to-image (T2I) diffusion
models, T2I customization further empowers users to tailor the diffusion model to new …
FAM Diffusion: Frequency and Attention Modulation for High-Resolution Image Generation with Stable Diffusion
Diffusion models are proficient at generating high-quality images. They are however
effective only when operating at the resolution used during training. Inference at a scaled …
Bringing Characters to New Stories: Training-Free Theme-Specific Image Generation via Dynamic Visual Prompting
The stories and characters that captivate us as we grow up shape unique fantasy worlds,
with images serving as the primary medium for visually experiencing these realms …
ReEdit: Multimodal Exemplar-Based Image Editing with Diffusion Models
A Srivastava, TR Menta, A Java, A Jadhav… - arXiv preprint arXiv …, 2024 - arxiv.org
Modern Text-to-Image (T2I) Diffusion models have revolutionized image editing by enabling
the generation of high-quality photorealistic images. While the de facto method for …