BP-MoE: Behavior Pattern-aware Mixture-of-Experts for temporal graph representation learning

C Chen, F Cai, W Chen, J Zheng, X Zhang… - Knowledge-Based …, 2024 - Elsevier
Temporal graph representation learning aims to develop low-dimensional embeddings for
nodes in a graph that can effectively capture their structural and temporal properties. Prior …

Efficient Detection of k-Plex Structures in Large Graphs Through Constraint Learning

HJ Hung, CH Lu, YY Huang, MY Chang… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
The k-plex is a popular definition of communities in networks, offering more flexibility than
cliques by allowing each node to miss up to k connections. However, finding k-plexes in …
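To make the definition above concrete, here is a minimal Python sketch that checks whether a candidate node set forms a k-plex, using the standard requirement that every node in the set S be adjacent to at least |S| - k members of S (under which a clique is a 1-plex). The adjacency-dict representation and the 4-cycle example are illustrative and not taken from the paper.

```python
# Minimal sketch: test whether a candidate node set is a k-plex.
# A set S is a k-plex if every node in S is adjacent to at least |S| - k
# members of S. The graph is a plain adjacency dict mapping node -> set of
# neighbours (no self-loops).

def is_k_plex(adj, nodes, k):
    """Return True if `nodes` is a k-plex in the graph given by `adj`."""
    s = set(nodes)
    return all(len(adj[v] & s) >= len(s) - k for v in s)

if __name__ == "__main__":
    # A 4-cycle: every node is adjacent to 2 of the 4 members.
    adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
    print(is_k_plex(adj, {0, 1, 2, 3}, 2))  # True: a 4-cycle is a 2-plex
    print(is_k_plex(adj, {0, 1, 2, 3}, 1))  # False: only cliques are 1-plexes
```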

Graph Mixture of Experts and Memory-augmented Routers for Multivariate Time Series Anomaly Detection

X Huang, W Chen, B Hu, Z Mao - arXiv preprint arXiv:2412.19108, 2024 - arxiv.org
Multivariate time series (MTS) anomaly detection is a critical task that involves identifying
abnormal patterns or events in data that consist of multiple interrelated time series. In order …

GraphMoRE: Mitigating Topological Heterogeneity via Mixture of Riemannian Experts

Z Guo, Q Sun, H Yuan, X Fu, M Zhou, Y Gao… - arXiv preprint arXiv …, 2024 - arxiv.org
Real-world graphs have inherently complex and diverse topological patterns, known as
topological heterogeneity. Most existing works learn graph representation in a single …

TopER: Topological Embeddings in Graph Representation Learning

A Tola, FM Taiwo, CG Akcora, B Coskunuzer - arXiv preprint arXiv …, 2024 - arxiv.org
Graph embeddings play a critical role in graph representation learning, allowing machine
learning models to explore and interpret graph-structured data. However, existing methods …

DA-MoE: Addressing Depth-Sensitivity in Graph-Level Analysis through Mixture of Experts

Z Yao, C Liu, X Meng, Y Zhan, J Wu, S Pan… - arXiv preprint arXiv …, 2024 - arxiv.org
Graph neural networks (GNNs) are gaining popularity for processing graph-structured data.
In real-world scenarios, graph data within the same dataset can vary significantly in scale …

EPT-MoE: Toward Efficient Parallel Transformers with Mixture-of-Experts for 3D Hand Gesture Recognition

A Alboody, R Slama - The 10th World Congress on Electrical …, 2024 - hal.science
The Mixture-of-Experts (MoE) is a widely known deep neural architecture where an
ensemble of specialized sub-models (a group of experts) optimizes the overall performance …
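For readers unfamiliar with the architecture described above, the following is a minimal NumPy sketch of a dense Mixture-of-Experts layer: a softmax gate produces per-input weights over a small group of experts, and the layer output is the gate-weighted sum of the expert outputs. The dimensions, number of experts, and use of purely linear experts are illustrative assumptions, not details of EPT-MoE.

```python
# Minimal dense MoE sketch: softmax gate over a small group of linear experts.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_experts = 8, 4, 3            # illustrative sizes only

W_gate = rng.normal(size=(d_in, n_experts))            # gating network (linear)
W_experts = rng.normal(size=(n_experts, d_in, d_out))  # one weight matrix per expert

def moe_forward(x):
    """x: (batch, d_in) -> (batch, d_out); every expert is weighted per input."""
    logits = x @ W_gate                                  # (batch, n_experts)
    gate = np.exp(logits - logits.max(axis=1, keepdims=True))
    gate /= gate.sum(axis=1, keepdims=True)              # softmax over experts
    expert_out = np.einsum("bi,eio->beo", x, W_experts)  # (batch, n_experts, d_out)
    return np.einsum("be,beo->bo", gate, expert_out)     # gate-weighted mixture

y = moe_forward(rng.normal(size=(5, d_in)))
print(y.shape)  # (5, 4)
```

Sparse MoE variants keep only the top-k gate entries per input so that most experts are skipped; the dense version above is the simplest form of the idea.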

Mixture of Link Predictors

L Ma, H Han, J Li, H Shomer, H Liu, X Gao… - arXiv preprint arXiv …, 2024 - arxiv.org
Link prediction, which aims to forecast unseen connections in graphs, is a fundamental task
in graph machine learning. Heuristic methods, leveraging a range of different pairwise …
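To make the notion of pairwise heuristics concrete, here is a small Python sketch of two classic link-prediction scores, common neighbours and Adamic-Adar, combined with fixed mixing weights. In a mixture-of-link-predictors setting the weights would instead come from a learned gate conditioned on the node pair; the toy graph, weights, and function names below are purely illustrative.

```python
# Two classic pairwise link-prediction heuristics and a toy fixed-weight mixture.
import math

def common_neighbors(adj, u, v):
    """Number of neighbours shared by u and v."""
    return len(adj[u] & adj[v])

def adamic_adar(adj, u, v):
    """Shared neighbours, down-weighted by the log of their degree."""
    return sum(1.0 / math.log(len(adj[w]))
               for w in adj[u] & adj[v] if len(adj[w]) > 1)

def mixed_score(adj, u, v, weights=(0.5, 0.5)):
    """Convex combination of the two heuristics (weights are illustrative)."""
    scores = (common_neighbors(adj, u, v), adamic_adar(adj, u, v))
    return sum(w * s for w, s in zip(weights, scores))

adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
print(mixed_score(adj, 1, 3))  # score for the candidate edge (1, 3)
```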

Merging Mixture of Experts and Retrieval Augmented Generation for Enhanced Information Retrieval and Reasoning

X **ong, M Zheng - 2024 - researchsquare.com
This study investigates the integration of Retrieval Augmented Generation (RAG) into the
Mixtral 8x7B Large Language Model (LLM), which already uses Mixture of Experts (MoE), to …

MEGA: Multi-encoder GNN Architecture for Stronger Task Collaboration and Generalization

F Khoshbakhtian, G Oberoi, D Aleman… - … European Conference on …, 2024 - Springer
Self-supervised learning in graphs has emerged as a promising avenue for harnessing
unlabeled graph data, leveraging pretext tasks to generate informative node …