Dynamical variational autoencoders: A comprehensive review
Variational autoencoders (VAEs) are powerful deep generative models widely used to
represent high-dimensional complex data through a low-dimensional latent space learned …
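To make the latent-variable setup surveyed above concrete, here is a minimal VAE sketch in PyTorch. The layer sizes, variable names, and loss weighting are illustrative assumptions, not details taken from the review: an encoder maps the input to the mean and log-variance of a Gaussian latent, a sample is drawn with the reparameterization trick, and the decoder reconstructs the input under a reconstruction-plus-KL objective.

```python
# Minimal VAE sketch (PyTorch). Dimensions and architecture are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(h_dim, z_dim)   # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar, beta=1.0):
    # Reconstruction term plus a (possibly weighted) KL divergence to N(0, I).
    rec = F.binary_cross_entropy_with_logits(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kl
```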
A survey of unsupervised generative models for exploratory data analysis and representation learning
For more than a century, the methods for data representation and the exploration of the
intrinsic structures of data have developed remarkably and consist of supervised and …
Diffusion-based generation, optimization, and planning in 3D scenes
We introduce SceneDiffuser, a conditional generative model for 3D scene understanding.
SceneDiffuser provides a unified model for solving scene-conditioned generation …
Versatile diffusion: Text, images and variations all in one diffusion model
Recent advances in diffusion models have set an impressive milestone in many generation
tasks, and trending works such as DALL-E2, Imagen, and Stable Diffusion have attracted …
CogView: Mastering text-to-image generation via transformers
Text-to-Image generation in the general domain has long been an open problem, which
requires both a powerful generative model and cross-modal understanding. We propose …
Implicit generation and modeling with energy based models
Energy based models (EBMs) are appealing due to their generality and simplicity in
likelihood modeling, but have been traditionally difficult to train. We present techniques to …
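The snippet above alludes to techniques for training EBMs; one standard ingredient in this line of work is drawing approximate samples from the model with Langevin dynamics. The sketch below shows only that sampling loop, for a hypothetical `energy_fn`; the step size, noise scale, and step count are illustrative assumptions rather than the paper's settings.

```python
# Langevin-dynamics sampling from an energy-based model p(x) proportional to exp(-E(x)).
# `energy_fn` and all hyperparameters here are illustrative assumptions.
import torch

def langevin_sample(energy_fn, x_init, n_steps=60, step_size=10.0, noise_scale=0.005):
    x = x_init.clone().detach()
    for _ in range(n_steps):
        x.requires_grad_(True)
        e = energy_fn(x).sum()                     # total energy of the batch
        grad, = torch.autograd.grad(e, x)          # gradient of energy w.r.t. samples
        # Step downhill on the energy, plus Gaussian noise to keep the chain stochastic.
        x = (x - 0.5 * step_size * grad
             + noise_scale * torch.randn_like(x)).detach()
    return x
```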
Cyclical annealing schedule: A simple approach to mitigating KL vanishing
Variational autoencoders (VAEs) with an auto-regressive decoder have been applied for
many natural language processing (NLP) tasks. The VAE objective consists of two terms, (i) …
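The truncated sentence above refers to the two terms of the VAE objective (reconstruction and KL); the paper's remedy for KL vanishing is to anneal the KL weight cyclically rather than monotonically. A minimal sketch of such a schedule follows; the number of cycles and the ramp fraction are illustrative defaults, not prescribed values.

```python
# Cyclical annealing of the KL weight (beta) in the VAE objective:
# within each cycle, beta ramps linearly from 0 to 1 over the first `ramp`
# fraction of the cycle, then stays at 1 for the remainder.
def cyclical_beta(step, total_steps, n_cycles=4, ramp=0.5):
    cycle_len = total_steps / n_cycles
    t = (step % cycle_len) / cycle_len   # position within the current cycle, in [0, 1)
    return min(t / ramp, 1.0)

# Usage sketch inside a training loop:
#   loss = rec + cyclical_beta(step, total_steps) * kl
```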
Learning generative vision transformer with energy-based latent space for saliency prediction
Vision transformer networks have shown superiority in many computer vision tasks. In this
paper, we take a step further by proposing a novel generative vision transformer with latent …
DLow: Diversifying latent flows for diverse human motion prediction
Deep generative models are often used for human motion prediction as they are able to
model multi-modal data distributions and characterize diverse human behavior. While much …
Uncertainty inspired RGB-D saliency detection
We propose the first stochastic framework to employ uncertainty for RGB-D saliency
detection by learning from the data labeling process. Existing RGB-D saliency detection …