SmartSAGE: Training large-scale graph neural networks using in-storage processing architectures

Y Lee, J Chung, M Rhu - Proceedings of the 49th Annual International …, 2022 - dl.acm.org
Graph neural networks (GNNs) can extract features by learning both the representation of
each object (i.e., graph nodes) and the relationship across different objects (i.e., the edges …

Flash-Cosmos: In-flash bulk bitwise operations using inherent computation capability of NAND flash memory

J Park, R Azizi, GF Oliveira… - 2022 55th IEEE/ACM …, 2022 - ieeexplore.ieee.org
Bulk bitwise operations, i.e., bitwise operations on large bit vectors, are prevalent in a wide
range of important application domains, including databases, graph processing, genome …

Hyperscale FPGA-as-a-service architecture for large-scale distributed graph neural network

S Li, D Niu, Y Wang, W Han, Z Zhang, T Guan… - Proceedings of the 49th …, 2022 - dl.acm.org
Graph neural network (GNN) is a promising emerging application for link prediction,
recommendation, etc. Existing hardware innovation is limited to single-machine GNN (SM …

FlashGNN: An In-SSD Accelerator for GNN Training

F Niu, J Yue, J Shen, X Liao… - 2024 IEEE International …, 2024 - ieeexplore.ieee.org
Recently, Graph Neural Networks (GNNs) have emerged as powerful tools for data analysis,
surpassing traditional algorithms in various applications. However, the growing size of real …

PreSto: An In-Storage Data Preprocessing System for Training Recommendation Models

Y Lee, H Kim, M Rhu - 2024 ACM/IEEE 51st Annual …, 2024 - ieeexplore.ieee.org
Training recommendation systems (RecSys) faces several challenges as it requires the
“data preprocessing” stage to preprocess an ample amount of raw data and feed them to the …

Ginex: SSD-enabled billion-scale graph neural network training on a single machine via provably optimal in-memory caching

Y Park, S Min, JW Lee - arXiv preprint arXiv:2208.09151, 2022 - arxiv.org
Recently, Graph Neural Networks (GNNs) have been in the spotlight as a powerful tool
that can effectively serve various inference tasks on graph-structured data. As the size of real …

BeaconGNN: Large-scale GNN acceleration with out-of-order streaming in-storage computing

Y Wang, X Pan, Y An, J Zhang… - 2024 IEEE International …, 2024 - ieeexplore.ieee.org
Prior in-storage computing (ISC) solutions show fundamental drawbacks when applied to
GNN acceleration. First, they obey a strict ordering of GNN neighbor sampling. Such …

MegIS: High-Performance, Energy-Efficient, and Low-Cost Metagenomic Analysis with In-Storage Processing

NM Ghiasi, M Sadrosadati, H Mustafa… - 2024 ACM/IEEE 51st …, 2024 - ieeexplore.ieee.org
Metagenomics, the study of the genome sequences of diverse organisms in a common
environment, has led to significant advances in many fields. Since the species present in a …

HGL: Accelerating heterogeneous GNN training with holistic representation and optimization

Y Gui, Y Wu, H Yang, T Jin, B Li, Q Zhou… - … Conference for High …, 2022 - ieeexplore.ieee.org
Graph neural networks (GNNs) have been shown to significantly improve graph analytics. Existing
systems for GNN training are primarily designed for homogeneous graphs. In industry …

OptimStore: In-storage optimization of large scale DNNs with on-die processing

J Kim, M Kang, Y Han, YG Kim… - 2023 IEEE International …, 2023 - ieeexplore.ieee.org
Training deep neural network (DNN) models is a resource-intensive, iterative process. For
this reason, nowadays, complex optimizers like Adam are widely adopted as they increase the …