MLaaS in the wild: Workload analysis and scheduling in large-scale heterogeneous GPU clusters
With sustained technological advances in machine learning (ML) and the recent availability of massive datasets, tech companies are deploying large ML-as-a-Service (MLaaS) …
Deep learning workload scheduling in GPU datacenters: A survey
Deep learning (DL) has demonstrated remarkable success in a wide variety of fields. The development of a DL model is a time-consuming and resource-intensive procedure. Hence …
A survey on scheduling techniques in computing and network convergence
S Tang, Y Yu, H Wang, G Wang, W Chen… - … Surveys & Tutorials, 2023 - ieeexplore.ieee.org
The computing demand for massive applications has led to the ubiquitous deployment of
computing power. This trend results in the urgent need for higher-level computing resource …
Gandiva: Introspective cluster scheduling for deep learning
We introduce Gandiva, a new cluster scheduling framework that utilizes domain-specific
knowledge to improve latency and efficiency of training deep learning models in a GPU …
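The domain-specific knowledge Gandiva exploits is that DL training is iterative: GPU state is smallest at mini-batch boundaries, so jobs can be suspended and resumed cheaply there, which makes time-slicing a GPU among jobs practical. Below is a minimal sketch of that time-slicing idea; the Job and time_slice names are illustrative, not Gandiva's API.

```python
import collections

class Job:
    """A DL training job that can checkpoint cheaply at mini-batch boundaries."""
    def __init__(self, name, total_iters):
        self.name = name
        self.iters_done = 0
        self.total_iters = total_iters

    def run_one_iteration(self):
        # In a real system this launches one mini-batch on the GPU; model and
        # optimizer state is swapped out only when the job is suspended.
        self.iters_done += 1

    def finished(self):
        return self.iters_done >= self.total_iters

def time_slice(jobs, slice_iters=10):
    """Round-robin one GPU among jobs, switching only at mini-batch
    boundaries, where the GPU state to save is minimal."""
    queue = collections.deque(jobs)
    while queue:
        job = queue.popleft()
        for _ in range(slice_iters):
            if job.finished():
                break
            job.run_one_iteration()
        if job.finished():
            print(f"{job.name} finished after {job.iters_done} iterations")
        else:
            queue.append(job)  # suspend and requeue

time_slice([Job("resnet", 25), Job("bert", 40)])
```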
Tiresias: A GPU cluster manager for distributed deep learning
Deep learning (DL) training jobs bring some unique challenges to existing cluster
managers, such as unpredictable training times, an all-or-nothing execution model, and …
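The "all-or-nothing execution model" mentioned here means distributed training must be gang-scheduled: a job runs only when all of its GPUs are granted at once, because workers synchronize every iteration, so a partial allocation is useless. A toy illustration of that admission rule follows; it is a generic sketch, not Tiresias's actual scheduling algorithm, which additionally has to cope with unpredictable job durations.

```python
def gang_schedule(pending_jobs, free_gpus):
    """All-or-nothing placement: launch a job only if its full GPU demand
    can be satisfied simultaneously; otherwise it keeps waiting.
    pending_jobs: list of (job_name, gpus_needed); free_gpus: int."""
    launched, still_pending = [], []
    for name, demand in pending_jobs:
        if demand <= free_gpus:
            free_gpus -= demand
            launched.append(name)
        else:
            still_pending.append((name, demand))  # no partial grant
    return launched, still_pending, free_gpus

launched, waiting, free = gang_schedule([("jobA", 8), ("jobB", 4)], free_gpus=10)
print(launched, waiting, free)  # ['jobA'] [('jobB', 4)] 2
```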
MAST: Global scheduling of ML training across geo-distributed datacenters at hyperscale
A Choudhury, Y Wang, T Pelkonen… - 18th USENIX …, 2024 - yangwang83.github.io
In public clouds, users must manually select a datacenter region to upload their ML training
data and launch ML training workloads in the same region to ensure data and computation …
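The contrast is with the manual baseline the abstract describes: instead of the user hard-coding a region, a global scheduler can choose one from data locality and free capacity. A toy placement function in that spirit is sketched below; the region and dataset structures are hypothetical, not MAST's interfaces.

```python
def place_job(job, regions):
    """Toy global placement: prefer a region that already holds the job's
    training data and has capacity; otherwise fall back to the region with
    the most free GPUs, which would require replicating the data there.
    regions: dict region -> {"free_gpus": int, "datasets": set}."""
    home = next((r for r, info in regions.items()
                 if job["dataset"] in info["datasets"]
                 and info["free_gpus"] >= job["gpus"]), None)
    if home:
        return home, "data-local"
    fallback = max(regions, key=lambda r: regions[r]["free_gpus"])
    return fallback, "needs-data-copy"

regions = {
    "us-east": {"free_gpus": 128, "datasets": {"clicks-v2"}},
    "eu-west": {"free_gpus": 512, "datasets": set()},
}
print(place_job({"dataset": "clicks-v2", "gpus": 64}, regions))
```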
HiveD: Sharing a GPU cluster for deep learning with guarantees
Deep learning training on a shared GPU cluster is becoming a common practice. However, we observe severe sharing anomalies in production multi-tenant clusters where jobs in some …
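HiveD frames its guarantee in terms of multi-level "cells" (GPU, node, and coarser topology levels) rather than raw GPU counts, so a tenant keeps not just N GPUs but GPUs with the affinity a private cluster would provide. Below is a much-simplified admission check in that spirit; it is illustrative only, not HiveD's actual mechanism, which is a hierarchical buddy-style cell allocator.

```python
from collections import Counter

def admit(tenant_quota, tenant_usage, request_level, request_count):
    """Check a request against a tenant's per-level cell reservation.
    Quotas are kept per affinity level (e.g., 'gpu', 'node'), which is what
    prevents the sharing anomaly where a tenant nominally within its GPU
    quota still receives fragmented GPUs worse than a private cluster."""
    used = tenant_usage.get(request_level, 0)
    quota = tenant_quota.get(request_level, 0)
    return used + request_count <= quota

quota = {"gpu": 32, "node": 4}          # reserved cells per level
usage = Counter({"gpu": 24, "node": 3})
print(admit(quota, usage, "node", 1))   # True: one whole-node cell remains
print(admit(quota, usage, "node", 2))   # False: exceeds the node-level guarantee
```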
Deep learning workload scheduling in GPU datacenters: Taxonomy, challenges and vision
Deep learning (DL) has prospered in a wide variety of fields. The development of a DL model is a time-consuming and resource-intensive procedure. Hence, dedicated GPU …
Comet: a novel memory-efficient deep learning training framework by using error-bounded lossy compression
Training wide and deep neural networks (DNNs) requires large amounts of storage resources such as memory because the intermediate activation data must be saved in the memory …
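The memory saving comes from compressing the saved activations under a user-set error bound. The sketch below shows the simplest form of that guarantee, uniform scalar quantization with absolute error at most error_bound; production error-bounded compressors such as SZ add prediction and entropy coding on top, but enforce the bound the same way. This is a generic sketch, not Comet's actual compressor.

```python
import numpy as np

def compress(activations, error_bound):
    """Uniform scalar quantization with a guaranteed absolute error bound:
    each reconstructed value differs from the original by at most
    error_bound. The integer codes are small and highly compressible."""
    step = 2.0 * error_bound                         # quantization bin width
    codes = np.round(activations / step).astype(np.int32)
    return codes, step

def decompress(codes, step):
    return codes * step                              # reconstruct bin centers

acts = np.random.randn(4, 1024)
codes, step = compress(acts, error_bound=1e-2)
recon = decompress(codes, step)
print("max abs error:", np.max(np.abs(recon - acts)))  # <= 1e-2 by construction
```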
Kube-knots: Resource harvesting through dynamic container orchestration in GPU-based datacenters
Compute heterogeneity is increasingly gaining prominence in modern datacenters due to
the addition of accelerators like GPUs and FPGAs. We observe that datacenter schedulers …
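Resource harvesting here means co-locating best-effort work onto GPUs whose primary tenants leave them underutilized. A toy headroom check in that spirit is sketched below; the thresholds and names are illustrative, not Kube-knots' Kubernetes integration, and a real harvester must also predict and react to utilization spikes to protect the primary job.

```python
def harvestable_gpus(gpu_stats, util_threshold=0.5, mem_threshold=0.5):
    """Pick GPUs whose measured compute and memory utilization leave enough
    headroom to co-locate a best-effort batch container.
    gpu_stats: dict gpu_id -> (compute_util, memory_util), both in [0, 1]."""
    return [gpu for gpu, (util, mem) in gpu_stats.items()
            if util < util_threshold and mem < mem_threshold]

stats = {"node1/gpu0": (0.15, 0.30),   # mostly idle: harvest
         "node1/gpu1": (0.90, 0.80),   # busy: leave alone
         "node2/gpu0": (0.40, 0.45)}   # moderate headroom: harvest
print(harvestable_gpus(stats))  # ['node1/gpu0', 'node2/gpu0']
```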