How to Protect Copyright Data in Optimization of Large Language Models?
The softmax operator is a crucial component of large language models (LLMs), which have
played a transformative role in computer research. Due to the centrality of the softmax …
Denoising diffusion autoencoders are unified self-supervised learners
Inspired by recent advances in diffusion models, which are reminiscent of denoising
autoencoders, we investigate whether they can acquire discriminative representations for …
On the design fundamentals of diffusion models: A survey
Diffusion models are generative models, which gradually add and remove noise to learn the
underlying distribution of training data for data generation. The components of diffusion …
OVRL-V2: A simple state-of-art baseline for ImageNav and ObjectNav
We present a single neural network architecture composed of task-agnostic components
(ViTs, convolutions, and LSTMs) that achieves state-of-art results on both the ImageNav (" …
Multi-architecture multi-expert diffusion models
In this paper, we address the performance degradation of efficient diffusion models by
introducing Multi-architecturE Multi-Expert diffusion models (MEME). We identify the need for …
Masked diffusion models are fast learners
Diffusion models have emerged as the de-facto technique for image generation, yet they
entail significant computational overhead, hindering the technique's broader application in …
Remix-DiT: Mixing Diffusion Transformers for Multi-Expert Denoising
Transformer-based diffusion models have achieved significant advancements across a
variety of generative tasks. However, producing high-quality outputs typically necessitates …
A Comprehensive Survey of Image and Video Generative AI: Recent Advances, Variants, and Applications
In recent years, the field of deep learning has experienced a surge in the popularity of
generative models, largely propelled by the transformative influence of Generative …
IPT-V2: Efficient Image Processing Transformer using Hierarchical Attentions
Recent advances have demonstrated the powerful capability of transformer architecture in
image restoration. However, our analysis indicates that existing transformer-based methods …
U-DiTs: Downsample Tokens in U-Shaped Diffusion Transformers
Diffusion Transformers (DiTs) introduce the transformer architecture to diffusion tasks for
latent-space image generation. With an isotropic architecture that chains a series of …