A survey on hallucination in large vision-language models
Recent development of Large Vision-Language Models (LVLMs) has attracted growing
attention within the AI landscape for its practical implementation potential. However, …
Explainable artificial intelligence for autonomous driving: A comprehensive overview and field guide for future research directions
S Atakishiyev, M Salameh, H Yao, R Goebel - IEEE Access, 2024 - ieeexplore.ieee.org
Autonomous driving has achieved significant milestones in research and development over
the last two decades. There is increasing interest in the field as the deployment of …
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, JB Alayrac, J Yu… - arXiv preprint arXiv …, 2023 - arxiv.org
This report introduces a new family of multimodal models, Gemini, that exhibit remarkable
capabilities across image, audio, video, and text understanding. The Gemini family consists …
The Llama 3 herd of models
Modern artificial intelligence (AI) systems are powered by foundation models. This paper
presents a new set of foundation models, called Llama 3. It is a herd of language models …
NExT-GPT: Any-to-any multimodal LLM
S Wu, H Fei, L Qu, W Ji, TS Chua - Forty-first International …, 2024 - openreview.net
While recently Multimodal Large Language Models (MM-LLMs) have made exciting strides,
they mostly fall prey to the limitation of only input-side multimodal understanding, without the …
Multimodal foundation models: From specialists to general-purpose assistants
LLaVA-OneVision: Easy visual task transfer
We present LLaVA-OneVision, a family of open large multimodal models (LMMs) developed
by consolidating our insights into data, models, and visual representations in the LLaVA …
Chat-UniVi: Unified visual representation empowers large language models with image and video understanding
Large language models have demonstrated impressive universal capabilities across a wide
range of open-ended tasks and have extended their utility to encompass multimodal …
LanguageBind: Extending video-language pretraining to N-modality by language-based semantic alignment
The video-language (VL) pretraining has achieved remarkable improvement in multiple
downstream tasks. However, the current VL pretraining framework is hard to extend to …
An image is worth 1/2 tokens after layer 2: Plug-and-play inference acceleration for large vision-language models
In this study, we identify the inefficient attention phenomena in Large Vision-Language
Models (LVLMs), notably within prominent models like LLaVA-1.5, QwenVL-Chat, and Video …