Mashup: Making serverless computing useful for HPC workflows via hybrid execution
This work introduces Mashup, a novel strategy that leverages the serverless computing model for
executing scientific workflows in a hybrid fashion by taking advantage of both the traditional …
Daydream: Executing dynamic scientific workflows on serverless platforms with hot starts
HPC applications are increasingly being designed as dynamic workflows for the ease of
development and scaling. This work demonstrates how the serverless computing model can …
DFMan: A graph-based optimization of dataflow scheduling on high-performance computing systems
Scientific research and development campaigns are materialized by workflows of
applications executing on high-performance computing (HPC) systems. These applications …
HDF5 Cache VOL: Efficient and scalable parallel I/O through caching data on node-local storage
Modern-era high performance computing (HPC) systems are providing multiple levels of
memory and storage layers to bridge the performance gap between fast memory and slow …
Hcompress: Hierarchical data compression for multi-tiered storage environments
Modern scientific applications read and write massive amounts of data through simulations,
observations, and analysis. These applications spend the majority of their runtime in …
Extracting and characterizing I/O behavior of HPC workloads
System administrators set default storage-system configuration parameters with the goal of
providing high performance for their system's I/O workloads. However, this generalized …
I/O acceleration via multi-tiered data buffering and prefetching
Modern High-Performance Computing (HPC) systems are adding extra layers to the
memory and storage hierarchy, named deep memory and storage hierarchy (DMSH), to …
Hfetch: Hierarchical data prefetching for scientific workflows in multi-tiered storage environments
In the era of data-intensive computing, accessing data with high throughput and low
latency is more imperative than ever. Data prefetching is a well-known technique for hiding …
Storage-heterogeneity aware task-based programming models to optimize I/O intensive applications
Task-based programming models have enabled the optimized execution of the computation
workloads of applications. These programming models can take advantage of large-scale …
DaYu: Optimizing Distributed Scientific Workflows by Decoding Dataflow Semantics and Dynamics
The combination of ever-growing scientific datasets and distributed workflow complexity
creates I/O performance bottlenecks due to data volume, velocity, and variety. Although the …