BP-MoE: Behavior Pattern-aware Mixture-of-Experts for temporal graph representation learning
Temporal graph representation learning aims to develop low-dimensional embeddings for
nodes in a graph that can effectively capture their structural and temporal properties. Prior …
Efficient Detection of k-Plex Structures in Large Graphs Through Constraint Learning
HJ Hung, CH Lu, YY Huang, MY Chang… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
The k-plex is a popular definition of communities in networks, offering more flexibility than
cliques by allowing each node to miss up to k connections. However, finding k-plexes in …
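To make the k-plex definition in the snippet above concrete, here is a minimal sketch, assuming an adjacency-set representation of the graph: a node set S is a k-plex when every member is adjacent to at least |S| - k members of S (by the usual convention a node counts itself among its allowed misses, so a clique is a 1-plex). The function name and toy graph are illustrative only; this is not the constraint-learning detection algorithm the paper proposes.

```python
def is_k_plex(adj, nodes, k):
    """Check whether `nodes` induces a k-plex in the graph given by the
    adjacency dict `adj` (node -> set of neighbours). Each member may be
    non-adjacent to at most k nodes of the set, counting itself, i.e. its
    internal degree is at least |nodes| - k."""
    s = set(nodes)
    return all(len(adj[v] & s) >= len(s) - k for v in s)

# Toy usage: a 4-node graph missing one edge is a 2-plex but not a clique (1-plex).
adj = {
    0: {1, 2},
    1: {0, 2, 3},
    2: {0, 1, 3},
    3: {1, 2},
}
print(is_k_plex(adj, [0, 1, 2, 3], 2))  # True
print(is_k_plex(adj, [0, 1, 2, 3], 1))  # False: nodes 0 and 3 are not adjacent
```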
Graph Mixture of Experts and Memory-augmented Routers for Multivariate Time Series Anomaly Detection
Multivariate time series (MTS) anomaly detection is a critical task that involves identifying
abnormal patterns or events in data that consist of multiple interrelated time series. In order …
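As a point of reference for the task described above, the following is a minimal sketch of a per-channel z-score detector over a multivariate series; it is only a crude baseline to illustrate what anomaly detection over multiple interrelated series means, not the graph-MoE / memory-augmented-router method of the paper, and all names and thresholds are hypothetical.

```python
import numpy as np

def zscore_anomalies(x, threshold=4.0):
    """Flag time steps whose per-channel z-score exceeds `threshold` in any
    of the series. x has shape (timesteps, channels). This treats channels
    independently; MTS methods instead model cross-series structure."""
    z = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)
    return np.where(np.abs(z).max(axis=1) > threshold)[0]

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 3))
x[120, 1] += 8.0                  # inject an anomalous spike in channel 1
print(zscore_anomalies(x))        # indices of flagged steps (should include 120)
```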
GraphMoRE: Mitigating Topological Heterogeneity via Mixture of Riemannian Experts
Real-world graphs have inherently complex and diverse topological patterns, known as
topological heterogeneity. Most existing works learn graph representation in a single …
TopER: Topological Embeddings in Graph Representation Learning
Graph embeddings play a critical role in graph representation learning, allowing machine
learning models to explore and interpret graph-structured data. However, existing methods …
DA-MoE: Addressing Depth-Sensitivity in Graph-Level Analysis through Mixture of Experts
Graph neural networks (GNNs) are gaining popularity for processing graph-structured data.
In real-world scenarios, graph data within the same dataset can vary significantly in scale …
EPT-MoE: Toward Efficient Parallel Transformers with Mixture-of-Experts for 3D Hand Gesture Recognition
The Mixture-of-Experts (MoE) is a widely known deep neural architecture where an
ensemble of specialized sub-models (a group of experts) optimizes the overall performance …
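The snippet above describes the generic MoE pattern; the toy layer below shows that pattern in PyTorch with a soft (dense) router, purely for illustration. It is not the EPT-MoE architecture, and the class and parameter names (TinyMoE, d_model, num_experts, d_hidden) are made up; production MoE layers typically route sparsely (top-1/top-2 experts) and add load-balancing terms.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    """Minimal soft-gated Mixture-of-Experts layer: a router produces a
    distribution over experts and the layer returns the gate-weighted sum
    of the experts' outputs."""
    def __init__(self, d_model, num_experts, d_hidden):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                           # x: (batch, d_model)
        gates = F.softmax(self.router(x), dim=-1)   # (batch, num_experts)
        outs = torch.stack([e(x) for e in self.experts], dim=1)  # (batch, E, d_model)
        return (gates.unsqueeze(-1) * outs).sum(dim=1)

x = torch.randn(4, 32)
layer = TinyMoE(d_model=32, num_experts=4, d_hidden=64)
print(layer(x).shape)  # torch.Size([4, 32])
```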
Mixture of Link Predictors
Link prediction, which aims to forecast unseen connections in graphs, is a fundamental task
in graph machine learning. Heuristic methods, leveraging a range of different pairwise …
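For context on the pairwise heuristics the snippet refers to, below is a minimal sketch of two classic ones, common neighbours and Adamic-Adar, over an adjacency-set graph. It does not show the mixture-of-predictors model the paper proposes, and the helper names are illustrative.

```python
import math

def common_neighbors(adj, u, v):
    """Score a candidate link (u, v) by the number of shared neighbours."""
    return len(adj[u] & adj[v])

def adamic_adar(adj, u, v):
    """Adamic-Adar index: shared neighbours weighted by 1 / log(degree),
    so rare common neighbours count more than hubs."""
    return sum(1.0 / math.log(len(adj[w]))
               for w in adj[u] & adj[v] if len(adj[w]) > 1)

adj = {
    'a': {'b', 'c', 'd'},
    'b': {'a', 'c'},
    'c': {'a', 'b', 'd'},
    'd': {'a', 'c'},
}
print(common_neighbors(adj, 'b', 'd'))  # 2 (shared neighbours: a and c)
print(adamic_adar(adj, 'b', 'd'))       # 2 / log(3) ≈ 1.82
```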
Merging Mixture of Experts and Retrieval Augmented Generation for Enhanced Information Retrieval and Reasoning
X Xiong, M Zheng - 2024 - researchsquare.com
This study investigates the integration of Retrieval Augmented Generation (RAG) into the
Mistral 8x7B Large Language Model (LLM), which already uses Mixture of Experts (MoE), to …
MEGA: Multi-encoder GNN Architecture for Stronger Task Collaboration and Generalization
Self-supervised learning in graphs has emerged as a promising avenue for harnessing
unlabeled graph data, leveraging pretext tasks to generate informative node …