A survey on video diffusion models
The recent wave of AI-generated content (AIGC) has witnessed substantial success in
computer vision, with the diffusion model playing a crucial role in this achievement. Due to …
Sora: A review on background, technology, limitations, and opportunities of large vision models
Sora is a text-to-video generative AI model, released by OpenAI in February 2024. The
model is trained to generate videos of realistic or imaginative scenes from text instructions …
Videopoet: A large language model for zero-shot video generation
We present VideoPoet, a language model capable of synthesizing high-quality video, with
matching audio, from a large variety of conditioning signals. VideoPoet employs a decoder …
Vbench: Comprehensive benchmark suite for video generative models
Video generation has witnessed significant advancements, yet evaluating these models
remains a challenge. A comprehensive evaluation benchmark for video generation is …
Dreamvideo: Composing your dream videos with customized subject and motion
Customized generation using diffusion models has made impressive progress in image
generation but remains unsatisfactory in the challenging video generation task as it requires …
Direct-a-video: Customized video generation with user-directed camera movement and object motion
Recent text-to-video diffusion models have achieved impressive progress. In practice, users
often desire the ability to control object motion and camera movement independently for …
Ccedit: Creative and controllable video editing via diffusion models
In this paper we present CCEdit, a versatile generative video editing framework based on
diffusion models. Our approach employs a novel trident network structure that separates …
Space-time diffusion features for zero-shot text-driven motion transfer
We present a new method for text-driven motion transfer: synthesizing a video that complies
with an input text prompt describing the target objects and scene while maintaining an input …
Videobooth: Diffusion-based video generation with image prompts
Text-driven video generation has witnessed rapid progress. However, merely using text prompts
is not enough to depict the desired subject appearance that accurately aligns with users' …
Customize-a-video: One-shot motion customization of text-to-video diffusion models
Image customization has been extensively studied in text-to-image (T2I) diffusion models,
leading to impressive outcomes and applications. With the emergence of text-to-video (T2V) …