Large language models are visual reasoning coordinators
Visual reasoning requires multimodal perception and commonsense cognition of the world.
Recently, multiple vision-language models (VLMs) have been proposed with excellent …
ALOFT: A lightweight MLP-like architecture with dynamic low-frequency transform for domain generalization
Domain generalization (DG) aims to learn a model that generalizes well to unseen target domains utilizing multiple source domains without re-training. Most existing DG works …
Robust mixture-of-expert training for convolutional neural networks
Sparsely-gated Mixture of Experts (MoE), an emerging deep model architecture, has demonstrated great promise for enabling high-accuracy and ultra-efficient model inference …
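As context for the sparsely-gated design this entry names, here is a minimal PyTorch sketch of top-k expert routing; the class name SparseMoE, the linear experts, and the choice of k=2 are illustrative assumptions, not the architecture studied in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Sparsely-gated MoE layer: each input is routed to its top-k experts only.
    Illustrative sketch; expert/gate shapes are assumptions, not from the paper."""
    def __init__(self, dim, num_experts=4, k=2):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.gate = nn.Linear(dim, num_experts)  # routing logits per input
        self.k = k

    def forward(self, x):                               # x: (batch, dim)
        topv, topi = self.gate(x).topk(self.k, dim=-1)  # keep only top-k experts
        weights = F.softmax(topv, dim=-1)               # renormalize over selected experts
        out = torch.zeros_like(x)
        for slot in range(self.k):                      # dispatch each routing slot
            for e, expert in enumerate(self.experts):
                mask = topi[:, slot] == e               # inputs routed to expert e here
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out
```

Because only k experts run per input, compute grows with k rather than with the total expert count, which is the efficiency property the abstract alludes to.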
FuseMoE: Mixture-of-experts transformers for fleximodal fusion
As machine learning models in critical fields increasingly grapple with multimodal data, they
face the dual challenges of handling a wide array of modalities, often incomplete due to …
Knowledge distillation-based domain-invariant representation learning for domain generalization
Domain generalization (DG) aims to generalize the knowledge learned from multiple source
domains to unseen target domains. Existing DG techniques can be subsumed under two …
DGMamba: Domain generalization via generalized state space model
Domain generalization (DG) aims at solving distribution shift problems in various scenes.
Existing approaches are based on Convolutional Neural Networks (CNNs) or Vision …
Graph mixture of experts: Learning on large-scale graphs with explicit diversity modeling
Graph neural networks (GNNs) have found extensive applications in learning from graph
data. However, real-world graphs often possess diverse structures and comprise nodes and …
CA-MoEiT: Generalizable face anti-spoofing via dual cross-attention and semi-fixed mixture-of-expert
A. Liu, International Journal of Computer Vision, 2024 (Springer)
Although the generalization of face anti-spoofing (FAS) has attracted increasing attention, solving it with Vision Transformers (ViT) is still at an early stage. In this paper, we present a …
MoE-FFD: Mixture of experts for generalized and parameter-efficient face forgery detection
Deepfakes have recently raised significant trust issues and security concerns among the
public. Compared to CNN face forgery detectors, ViT-based methods take advantage of the …
On least square estimation in softmax gating mixture of experts
The mixture of experts (MoE) model is a statistical machine learning design that aggregates multiple expert networks using a softmax gating function in order to form a more intricate and …
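To make the gating mechanism in this last entry concrete: a softmax-gated MoE computes y(x) = Σᵢ softmax(g(x))ᵢ · Eᵢ(x), a convex combination of all expert outputs. Below is a minimal dense PyTorch sketch under that reading; the name SoftmaxGatedMoE and the linear experts are illustrative assumptions, not the estimator analyzed in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftmaxGatedMoE(nn.Module):
    """Dense MoE: output is the softmax-weighted mixture of all expert outputs.
    Illustrative sketch; expert form (linear) is an assumption, not from the paper."""
    def __init__(self, dim, num_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, x):                                        # x: (batch, dim)
        weights = F.softmax(self.gate(x), dim=-1)                # (batch, num_experts)
        outs = torch.stack([e(x) for e in self.experts], dim=1)  # (batch, E, dim)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)         # convex combination
```

For example, SoftmaxGatedMoE(dim=16)(torch.randn(8, 16)) returns an (8, 16) tensor in which every expert contributes to every input, in contrast to the sparse top-k routing sketched earlier in this list.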