LOOK-M: Look-once optimization in KV cache for efficient multimodal long-context inference

Z Wan, Z Wu, C Liu, J Huang, Z Zhu, P Jin… - arXiv preprint arXiv …, 2024 - arxiv.org
Long-context Multimodal Large Language Models (MLLMs) demand substantial
computational resources for inference as the growth of their multimodal Key-Value (KV) …
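
The snippet turns on the size of the multimodal KV cache, which grows linearly with context length. As a rough back-of-envelope sketch (the layer, head, and precision numbers below are assumptions for a generic 7B-class decoder, not figures from the LOOK-M paper):

```python
# Illustrative estimate of KV cache memory as the context grows.
# All model dimensions are assumed values, not taken from the paper.

def kv_cache_bytes(seq_len, n_layers=32, n_kv_heads=32, head_dim=128,
                   bytes_per_elem=2):
    """KV cache held for one sequence: keys + values across all layers."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

for seq_len in (4_096, 32_768, 131_072):
    gib = kv_cache_bytes(seq_len) / 2**30
    print(f"{seq_len:>7} tokens -> ~{gib:.1f} GiB of KV cache")
```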

In-context LoRA for diffusion transformers

L Huang, W Wang, ZF Wu, Y Shi, H Dou… - arXiv preprint arXiv …, 2024 - arxiv.org
Recent research arXiv:2410.15027 has explored the use of diffusion transformers (DiTs) for
task-agnostic image generation by simply concatenating attention tokens across images …
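
As a rough illustration of the "concatenating tokens across images" idea mentioned in the snippet, the sketch below joins per-image token sequences so a single self-attention pass mixes information across images; the shapes and the plain scaled-dot-product attention are assumptions for demonstration, not the DiT architecture used in the paper:

```python
# Minimal sketch: concatenate token sets from several images and run one
# attention pass over the joint sequence. Purely illustrative shapes.
import torch
import torch.nn.functional as F

def joint_attention(image_token_sets):
    # image_token_sets: list of (n_i, d) tensors, one per image
    tokens = torch.cat(image_token_sets, dim=0)    # (sum n_i, d) joint sequence
    q = k = v = tokens.unsqueeze(0)                # add a batch dimension
    out = F.scaled_dot_product_attention(q, k, v)  # tokens attend across all images
    return out.squeeze(0)

imgs = [torch.randn(16, 64) for _ in range(3)]     # three images, 16 tokens each
print(joint_attention(imgs).shape)                 # torch.Size([48, 64])
```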

MuMA-ToM: Multi-modal multi-agent theory of mind

H Shi, S Ye, X Fang, C Jin, L Isik, YL Kuo… - arXiv preprint arXiv …, 2024 - arxiv.org
Understanding people's social interactions in complex real-world scenarios often relies on
intricate mental reasoning. To truly understand how and why people interact with one …

A survey on multimodal benchmarks: In the era of large AI models

L Li, G Chen, H Shi, J Xiao, L Chen - arXiv preprint arXiv:2409.18142, 2024 - arxiv.org
The rapid evolution of Multimodal Large Language Models (MLLMs) has brought substantial
advancements in artificial intelligence, significantly enhancing the capability to understand …

ACDC: Autoregressive coherent multimodal generation using diffusion correction

H Chung, D Lee, JC Ye - arXiv preprint arXiv:2410.04721, 2024 - arxiv.org
Autoregressive models (ARMs) and diffusion models (DMs) represent two leading
paradigms in generative modeling, each excelling in distinct areas: ARMs in global context …

When Attention Sink Emerges in Language Models: An Empirical View

X Gu, T Pang, C Du, Q Liu, F Zhang, C Du… - arXiv preprint arXiv …, 2024 - arxiv.org
Language Models (LMs) assign significant attention to the first token, even if it is not
semantically important, which is known as attention sink. This phenomenon has been widely …
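
One simple way to see the attention-sink pattern described in the snippet is to measure how much attention mass later query positions place on token 0. The model (gpt2) and example sentence below are arbitrary choices for illustration, not the experimental setup of the paper:

```python
# Illustrative probe: average attention assigned to the first token per layer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"                                 # small stand-in model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tok("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_attentions=True)

# out.attentions: one (batch, heads, seq, seq) tensor per layer
for layer_idx, attn in enumerate(out.attentions):
    sink_mass = attn[0, :, 1:, 0].mean().item()     # attention later queries give to token 0
    print(f"layer {layer_idx:2d}: mean attention on the first token = {sink_mass:.3f}")
```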

DiffSensei: Bridging Multi-Modal LLMs and Diffusion Models for Customized Manga Generation

J Wu, C Tang, J Wang, Y Zeng, X Li, Y Tong - arXiv preprint arXiv …, 2024 - arxiv.org
Story visualization, the task of creating visual narratives from textual descriptions, has seen
progress with text-to-image generation models. However, these models often lack effective …

Trans4D: Realistic Geometry-Aware Transition for Compositional Text-to-4D Synthesis

B Zeng, L Yang, S Li, J Liu, Z Zhang, J Tian… - arXiv preprint arXiv …, 2024 - arxiv.org
Recent advances in diffusion models have demonstrated exceptional capabilities in image
and video generation, further improving the effectiveness of 4D synthesis. Existing 4D …

Autoregressive Models in Vision: A Survey

J Xiong, G Liu, L Huang, C Wu, T Wu, Y Mu… - arXiv preprint arXiv …, 2024 - arxiv.org
Autoregressive modeling has been a huge success in the field of natural language
processing (NLP). Recently, autoregressive models have emerged as a significant area of …
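
For readers new to the topic, the core recipe the survey refers to is the autoregressive factorization p(x) = ∏_t p(x_t | x_<t) with sequential sampling. The toy sketch below uses a random stand-in for the learned model, purely to show the sampling loop:

```python
# Toy autoregressive sampling loop: one token at a time, each conditioned on
# the prefix. The "model" is a random categorical distribution, for illustration only.
import torch

vocab_size, seq_len = 16, 8

def toy_next_token_logits(prefix):
    # Stand-in for a trained network: logits depend (arbitrarily) on the prefix length.
    torch.manual_seed(len(prefix))
    return torch.randn(vocab_size)

tokens = []
for _ in range(seq_len):                            # generate one token per step
    probs = torch.softmax(toy_next_token_logits(tokens), dim=-1)
    tokens.append(torch.multinomial(probs, num_samples=1).item())

print("sampled sequence:", tokens)
```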

A Survey on Vision Autoregressive Model

K Jiang, J Huang - arXiv preprint arXiv:2411.08666, 2024 - arxiv.org
Autoregressive models have demonstrated great performance in natural language
processing (NLP) with impressive scalability, adaptability and generalizability. Inspired by …