The evolution of distributed systems for graph neural networks and their origin in graph processing and deep learning: A survey
Graph neural networks (GNNs) are an emerging research field. This specialized deep
neural network architecture is capable of processing graph-structured data and bridges the …
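To make "processing graph-structured data" concrete, here is a minimal sketch of a single message-passing layer in NumPy: neighbor features are averaged over the adjacency structure and then linearly transformed. The normalization and ReLU are illustrative choices, not details taken from the survey.

```python
import numpy as np

def gnn_layer(adj, features, weight):
    """One message-passing layer: aggregate neighbor features, then transform.

    adj      -- (n, n) adjacency matrix of the graph
    features -- (n, d_in) node feature matrix
    weight   -- (d_in, d_out) learnable weight matrix
    """
    # Add self-loops so each node keeps its own features.
    a_hat = adj + np.eye(adj.shape[0])
    # Row-normalize so aggregation is a mean over neighbors.
    deg_inv = 1.0 / a_hat.sum(axis=1, keepdims=True)
    aggregated = deg_inv * (a_hat @ features)
    # Linear transform followed by a ReLU nonlinearity.
    return np.maximum(aggregated @ weight, 0.0)

# Toy 3-node path graph: 0-1, 1-2.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
x = np.random.rand(3, 4)
w = np.random.rand(4, 2)
print(gnn_layer(adj, x, w).shape)  # (3, 2)
```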
FedAT: A high-performance and communication-efficient federated learning system with asynchronous tiers
Federated learning (FL) involves training a model over massive distributed devices, while
keeping the training data localized and private. This form of collaborative learning exposes …
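The idea of asynchronous tiers can be pictured, very roughly, as grouping clients by speed and weighting each tier's model during aggregation. The sketch below is only that rough picture; the weighting rule and function names are assumptions, not FedAT's actual algorithm.

```python
import numpy as np

def aggregate_tiers(tier_models, tier_update_counts):
    """Combine per-tier models into one global model.

    tier_models        -- list of parameter vectors, one per tier
    tier_update_counts -- how many updates each tier has contributed so far
    Slower tiers (fewer updates) are weighted more heavily so they are not
    drowned out by fast tiers -- a simplified stand-in for tier weighting.
    """
    counts = np.asarray(tier_update_counts, dtype=float)
    weights = 1.0 / (counts + 1.0)
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, tier_models))

# Three tiers with different participation rates.
models = [np.full(4, 1.0), np.full(4, 2.0), np.full(4, 3.0)]
print(aggregate_tiers(models, tier_update_counts=[10, 5, 1]))
```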
Big graphs: challenges and opportunities
W Fan - Proceedings of the VLDB Endowment, 2022 - dl.acm.org
Big data is typically characterized with 4V's: Volume, Velocity, Variety and Veracity. When it
comes to big graphs, these challenges become even more staggering. Each and every of …
Parallelizing sequential graph computations
This article presents GRAPE, a parallel GRAPh Engine for graph computations. GRAPE
differs from prior systems in its ability to parallelize existing sequential graph algorithms as a …
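The flavor of parallelizing an unchanged sequential algorithm can be seen in a toy example: run the sequential routine on each fragment, exchange values for shared border vertices, and repeat until nothing changes. The single-source shortest-paths code below illustrates that general pattern only; it is not GRAPE's API.

```python
INF = float("inf")

def sssp_fragment(edges, dist):
    """Unchanged sequential relaxation, run independently inside one fragment."""
    changed = True
    while changed:
        changed = False
        for u, v, w in edges:
            if dist.get(u, INF) + w < dist.get(v, INF):
                dist[v] = dist[u] + w
                changed = True
    return dist

# The graph is split into two fragments that share the border vertex "c".
frag_edges = [[("a", "b", 1), ("b", "c", 2)], [("c", "d", 1), ("d", "e", 3)]]
dists = [{"a": 0}, {}]
for _ in range(3):                      # a few rounds suffice for this toy graph
    for i, edges in enumerate(frag_edges):
        dists[i] = sssp_fragment(edges, dists[i])
    # Exchange step: reconcile the shared border vertex across fragments.
    c = min(dists[0].get("c", INF), dists[1].get("c", INF))
    dists[0]["c"] = dists[1]["c"] = c
print(dists[1])                         # {'c': 3, 'd': 4, 'e': 7}
```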
Application driven graph partitioning
Graph partitioning is crucial to parallel computations on large graphs. The choice of
partitioning strategies has a strong impact on not only the performance of graph algorithms …
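How strongly the partitioning strategy matters shows up even on a toy graph: the sketch below compares the edge cut (cross-worker edges) of a hash partition and a contiguous range partition on a path graph. Both strategies and the metric are illustrative choices, not the paper's method.

```python
def edge_cut(edges, assign):
    """Number of edges whose endpoints land on different workers."""
    return sum(1 for u, v in edges if assign(u) != assign(v))

edges = [(i, i + 1) for i in range(99)]   # path graph on vertices 0..99
hash_assign = lambda v: v % 4             # hash partition over 4 workers
range_assign = lambda v: v // 25          # contiguous range partition
print(edge_cut(edges, hash_assign))       # 99: nearly every edge is cut
print(edge_cut(edges, range_assign))      # 3: only the range boundaries
```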
SEP-graph: finding shortest execution paths for graph processing under a hybrid framework on GPU
In general, the performance of parallel graph processing is determined by three pairs of
critical parameters, namely synchronous or asynchronous execution mode (Sync or Async) …
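The Sync/Async distinction mentioned in the snippet can be illustrated on the CPU with breadth-first search: the synchronous version processes one whole frontier per round with a barrier in between, while the asynchronous version processes vertices from a worklist as soon as they appear. This is only an illustration of the two execution modes, not the paper's GPU framework.

```python
from collections import deque

graph = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}

def bfs_sync(graph, source):
    """Synchronous (level-by-level) traversal: whole frontier per round."""
    dist, frontier = {source: 0}, [source]
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt  # barrier between rounds
    return dist

def bfs_async(graph, source):
    """Asynchronous traversal: vertices processed as soon as they appear."""
    dist, work = {source: 0}, deque([source])
    while work:
        u = work.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                work.append(v)
    return dist

print(bfs_sync(graph, 0) == bfs_async(graph, 0))  # True: same result, different schedule
```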
Pisces: Efficient federated learning via guided asynchronous training
Federated learning (FL) is typically performed in a synchronous parallel manner, and the
involvement of a slow client delays the training progress. Current FL systems employ a …
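Asynchronous FL designs generally have to discount stale client updates when merging them into the global model. The decay rule below is one common, illustrative choice and is not Pisces's actual scheme.

```python
import numpy as np

def apply_update(global_model, client_update, staleness, base_lr=0.5):
    """Fold one asynchronous client update into the global model.

    staleness -- number of global versions that passed since the client
                 pulled the model; older updates get a smaller weight.
    """
    weight = base_lr / (1.0 + staleness)   # illustrative decay rule
    return (1.0 - weight) * global_model + weight * client_update

model = np.zeros(3)
model = apply_update(model, np.array([1.0, 1.0, 1.0]), staleness=0)  # fresh update
model = apply_update(model, np.array([4.0, 4.0, 4.0]), staleness=5)  # stale, damped
print(model)
```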
Linking entities across relations and graphs
This article proposes a notion of parametric simulation to link entities across a relational
database 𝒟 and a graph G. Taking functions and thresholds for measuring vertex closeness …
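To give a concrete (and deliberately simplified) reading of "functions and thresholds for measuring vertex closeness": link a relational tuple to a graph vertex when a user-supplied closeness function on their attributes clears a threshold. The closeness function and threshold below are placeholders, not the paper's parametric simulation.

```python
def link_entities(tuples, vertices, closeness, threshold):
    """Pair each relational tuple with graph vertices whose closeness
    score clears the threshold (a simplified, illustrative criterion)."""
    links = []
    for t in tuples:
        for v in vertices:
            if closeness(t, v) >= threshold:
                links.append((t["id"], v["id"]))
    return links

# Toy data: tuples from a relation, vertices from a property graph.
tuples = [{"id": "t1", "name": "Alice Smith"}, {"id": "t2", "name": "Bob Lee"}]
vertices = [{"id": "v1", "name": "alice smith"}, {"id": "v2", "name": "Carol Wu"}]

def name_closeness(t, v):
    """Fraction of shared lowercase name tokens -- a placeholder metric."""
    a, b = set(t["name"].lower().split()), set(v["name"].lower().split())
    return len(a & b) / max(len(a | b), 1)

print(link_entities(tuples, vertices, name_closeness, threshold=0.8))  # [('t1', 'v1')]
```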
Capturing associations in graphs
This paper proposes a class of graph association rules, denoted by GARs, to specify
regularities between entities in graphs. A GAR is a combination of a graph pattern and a …
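A rule that combines a graph pattern with a condition on the matched entities can be pictured as follows; the "person works_at company implies matching country" rule is a made-up example, not a GAR from the paper.

```python
# Illustrative rule: IF a "person" vertex has a "works_at" edge to a "company"
# vertex (the pattern part), THEN the person's "country" attribute should equal
# the company's "country" (the condition part).  Made-up example.
graph = {
    "vertices": {
        "p1": {"type": "person", "country": "UK"},
        "c1": {"type": "company", "country": "UK"},
        "p2": {"type": "person", "country": "US"},
        "c2": {"type": "company", "country": "DE"},
    },
    "edges": [("p1", "works_at", "c1"), ("p2", "works_at", "c2")],
}

def violations(graph):
    """Return pattern matches where the rule's condition fails."""
    out = []
    for src, label, dst in graph["edges"]:
        s, d = graph["vertices"][src], graph["vertices"][dst]
        if label == "works_at" and s["type"] == "person" and d["type"] == "company":
            if s["country"] != d["country"]:        # condition violated
                out.append((src, dst))
    return out

print(violations(graph))  # [('p2', 'c2')]
```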
DRPS: efficient disk-resident parameter servers for distributed machine learning
The parameter server (PS), as the state-of-the-art distributed framework for large-scale iterative
machine learning tasks, has been extensively studied. However, existing PS-based systems …
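For context on what a parameter server does, here is a minimal in-memory push/pull sketch over one parameter shard. It says nothing about DRPS's disk-resident design; the class and method names are illustrative.

```python
import numpy as np

class ParameterShard:
    """One shard of the model held by one server -- a toy, in-memory stand-in;
    real systems shard across machines and, in disk-resident designs,
    may spill parameters to storage."""

    def __init__(self, size):
        self.params = np.zeros(size)

    def pull(self, keys):
        # Workers pull the current values of the parameters they need.
        return self.params[keys]

    def push(self, keys, grads, lr=0.1):
        # Workers push gradients; the server applies them to its shard.
        self.params[keys] -= lr * grads

server = ParameterShard(size=8)
keys = np.array([0, 3, 5])
w = server.pull(keys)                  # worker pulls current values
grads = np.array([0.5, -1.0, 0.25])    # gradients computed locally
server.push(keys, grads)               # worker pushes them back
print(server.pull(keys))               # parameters after one update: [-0.05, 0.1, -0.025]
```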