The evolution of distributed systems for graph neural networks and their origin in graph processing and deep learning: A survey
Graph neural networks (GNNs) are an emerging research field. This specialized deep
neural network architecture is capable of processing graph structured data and bridges the …
Computing graph neural networks: A survey from algorithms to accelerators
Graph Neural Networks (GNNs) have exploded onto the machine learning scene in recent
years owing to their capability to model and learn from graph-structured data. Such an ability …
GossipFL: A decentralized federated learning framework with sparsified and adaptive communication
Recently, federated learning (FL) techniques have enabled multiple users to train machine
learning models collaboratively without data sharing. However, existing FL algorithms suffer …
Privacy and security in federated learning: A survey
In recent years, privacy concerns have become a serious issue for companies wishing to
protect economic models and comply with end-user expectations. In the same vein, some …
MAST: Global scheduling of ML training across Geo-Distributed datacenters at hyperscale
A Choudhury, Y Wang, T Pelkonen… - … USENIX Symposium on …, 2024 - usenix.org
In public clouds, users must manually select a datacenter region to upload their ML training
data and launch ML training workloads in the same region to ensure data and computation …
nnScaler: Constraint-Guided Parallelization Plan Generation for Deep Learning Training
With the growing model size of deep neural networks (DNN), deep learning training is
increasingly relying on handcrafted search spaces to find efficient parallelization execution …
Chimera: efficiently training large-scale neural networks with bidirectional pipelines
Training large deep learning models at scale is very challenging. This paper proposes
Chimera, a novel pipeline parallelism scheme which combines bidirectional pipelines for …
A survey of on-device machine learning: An algorithms and learning theory perspective
The predominant paradigm for using machine learning models on a device is to train a
model in the cloud and perform inference with the trained model on the device. However …