The evolution of distributed systems for graph neural networks and their origin in graph processing and deep learning: A survey

J Vatter, R Mayer, HA Jacobsen - ACM Computing Surveys, 2023 - dl.acm.org
Graph neural networks (GNNs) are an emerging research field. This specialized deep
neural network architecture is capable of processing graph-structured data and bridges the …

FedAT: A high-performance and communication-efficient federated learning system with asynchronous tiers

Z Chai, Y Chen, A Anwar, L Zhao, Y Cheng… - Proceedings of the …, 2021 - dl.acm.org
Federated learning (FL) involves training a model over massive distributed devices, while
keeping the training data localized and private. This form of collaborative learning exposes …
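A minimal sketch of the federated-learning setup this abstract describes: each client trains on its own local data and only model updates leave the device. This is generic FedAvg-style weighted averaging, not FedAT's asynchronous-tier protocol; the gradient logic, client shapes, and round count are illustrative assumptions.

```python
# Generic federated-averaging sketch (NOT FedAT's tiered asynchronous scheme).
# Raw client data never leaves `clients`; only updated weights are aggregated.
import numpy as np

def local_update(weights, local_data, lr=0.1, epochs=1):
    """One client's local training pass (least-squares gradient as a stand-in)."""
    w = weights.copy()
    X, y = local_data
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """Average client updates, weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_weights, (X, y)))
        sizes.append(len(y))
    sizes = np.asarray(sizes, dtype=float)
    return np.average(np.stack(updates), axis=0, weights=sizes / sizes.sum())

# Usage: three synthetic clients sharing one global linear model.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]
w = np.zeros(4)
for _ in range(10):
    w = federated_round(w, clients)
```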

Big graphs: challenges and opportunities

W Fan - Proceedings of the VLDB Endowment, 2022 - dl.acm.org
Big data is typically characterized by 4V's: Volume, Velocity, Variety and Veracity. When it
comes to big graphs, these challenges become even more staggering. Each and every of …

Parallelizing sequential graph computations

W Fan, W Yu, J Xu, J Zhou, X Luo, Q Yin, P Lu… - ACM Transactions on …, 2018 - dl.acm.org
This article presents GRAPE, a parallel GRAPh Engine for graph computations. GRAPE
differs from prior systems in its ability to parallelize existing sequential graph algorithms as a …

Application driven graph partitioning

W Fan, R Jin, M Liu, P Lu, X Luo, R Xu, Q Yin… - Proceedings of the …, 2020 - dl.acm.org
Graph partitioning is crucial to parallel computations on large graphs. The choice of
partitioning strategies has strong impact on not only the performance of graph algorithms …

SEP-graph: finding shortest execution paths for graph processing under a hybrid framework on GPU

H Wang, L Geng, R Lee, K Hou, Y Zhang… - Proceedings of the 24th …, 2019 - dl.acm.org
In general, the performance of parallel graph processing is determined by three pairs of
critical parameters, namely synchronous or asynchronous execution mode (Sync or Async) …
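A toy contrast of the Sync/Async execution modes this abstract names, using single-source shortest paths on a small directed graph. It is a conceptual sketch only, not SEP-graph's GPU hybrid framework; the graph and function names are assumptions.

```python
# Sync vs. Async graph processing, illustrated with single-source shortest paths.
import math

graph = {0: [(1, 2), (2, 5)], 1: [(2, 1), (3, 4)], 2: [(3, 1)], 3: []}  # node -> [(neighbor, weight)]

def sssp_sync(graph, source):
    """Synchronous mode: updates become visible only after a round-level barrier."""
    dist = {v: math.inf for v in graph}
    dist[source] = 0
    for _ in range(len(graph) - 1):          # one barrier per round
        new_dist = dict(dist)
        for u, edges in graph.items():
            for v, w in edges:
                new_dist[v] = min(new_dist[v], dist[u] + w)
        dist = new_dist
    return dist

def sssp_async(graph, source):
    """Asynchronous mode: a relaxed distance is visible to later work immediately."""
    dist = {v: math.inf for v in graph}
    dist[source] = 0
    worklist = [source]
    while worklist:
        u = worklist.pop()
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w        # no barrier; propagates at once
                worklist.append(v)
    return dist

assert sssp_sync(graph, 0) == sssp_async(graph, 0)
```

Both modes reach the same fixpoint here; the abstract's point is that which mode is faster depends on the graph and the algorithm, hence the search for the best execution path.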

Pisces: Efficient federated learning via guided asynchronous training

Z Jiang, W Wang, B Li, B Li - Proceedings of the 13th Symposium on …, 2022 - dl.acm.org
Federated learning (FL) is typically performed in a synchronous parallel manner, and the
involvement of a slow client delays the training progress. Current FL systems employ a …

Linking entities across relations and graphs

W Fan, P Lu, K Pang, R Jin, W Yu - ACM Transactions on Database …, 2024 - dl.acm.org
This article proposes a notion of parametric simulation to link entities across a relational
database 𝒟 and a graph G. Taking functions and thresholds for measuring vertex closeness …

Capturing associations in graphs

W Fan, R Jin, M Liu, P Lu, C Tian, J Zhou - Proceedings of the VLDB …, 2020 - dl.acm.org
This paper proposes a class of graph association rules, denoted by GARs, to specify
regularities between entities in graphs. A GAR is a combination of a graph pattern and a …

DRPS: efficient disk-resident parameter servers for distributed machine learning

Z Song, Y Gu, Z Wang, G Yu - Frontiers of Computer Science, 2022 - Springer
The parameter server (PS), as the state-of-the-art distributed framework for large-scale iterative
machine learning tasks, has been extensively studied. However, existing PS-based systems …
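A bare-bones sketch of the parameter-server pattern this abstract refers to: workers pull the current parameters, compute gradients on their data shard, and push updates back. It is a single-process toy with illustrative names, not DRPS's disk-resident design.

```python
# Minimal in-memory parameter-server sketch (illustrative, not DRPS itself).
import numpy as np

class ParameterServer:
    def __init__(self, dim, lr=0.05):
        self.params = np.zeros(dim)
        self.lr = lr

    def pull(self):
        """Workers fetch the current global parameters."""
        return self.params.copy()

    def push(self, grad):
        """Workers send gradients; the server applies a simple SGD step."""
        self.params -= self.lr * grad

def worker_step(server, X, y):
    w = server.pull()
    grad = X.T @ (X @ w - y) / len(y)        # placeholder linear-model gradient
    server.push(grad)

# Usage: two workers alternately pushing gradients for one shared model.
rng = np.random.default_rng(1)
shards = [(rng.normal(size=(100, 3)), rng.normal(size=100)) for _ in range(2)]
ps = ParameterServer(dim=3)
for _ in range(20):
    for X, y in shards:
        worker_step(ps, X, y)
print(ps.pull())
```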