[HTML] A survey on hardware accelerators: Taxonomy, trends, challenges, and perspectives

B Peccerillo, M Mannino, A Mondelli… - Journal of Systems …, 2022 - Elsevier
In recent years, the limits of the multicore approach have emerged in the so-called “dark silicon”
issue and the diminishing returns of an ever-increasing core count. Hardware manufacturers …

[HTML] Resistive-RAM-based in-memory computing for neural network: A review

W Chen, Z Qi, Z Akhtar, K Siddique - Electronics, 2022 - mdpi.com
Processing-in-memory (PIM) is a promising architecture for designing various types of neural
network accelerators, as it ensures computational efficiency together with Resistive …

[Book][B] Efficient processing of deep neural networks

V Sze, YH Chen, TJ Yang, JS Emer - 2020 - Springer
This book provides a structured treatment of the key principles and techniques for enabling
efficient processing of deep neural networks (DNNs). DNNs are currently widely used for …

RAELLA: Reforming the arithmetic for efficient, low-resolution, and low-loss analog PIM: No retraining required!

T Andrulis, JS Emer, V Sze - … of the 50th Annual International Symposium …, 2023 - dl.acm.org
Processing-In-Memory (PIM) accelerators have the potential to efficiently run Deep Neural
Network (DNN) inference by reducing costly data movement and by using resistive RAM …

Accelerating graph convolutional networks using crossbar-based processing-in-memory architectures

Y Huang, L Zheng, P Yao, Q Wang… - … Symposium on High …, 2022 - ieeexplore.ieee.org
Graph convolutional networks (GCNs) are a promising approach to enabling machine learning on graphs.
GCNs exhibit mixed computational kernels, involving regular neural-network-like computing …

Timely: Pushing data movements and interfaces in PIM accelerators towards local and in time domain

W Li, P Xu, Y Zhao, H Li, Y Xie… - 2020 ACM/IEEE 47th …, 2020 - ieeexplore.ieee.org
Resistive-random-access-memory (ReRAM) based processing-in-memory (R2PIM)
accelerators show promise in bridging the gap between Internet of Things devices' …

Advancements in accelerating deep neural network inference on AIoT devices: A survey

L Cheng, Y Gu, Q Liu, L Yang, C Liu… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
The amalgamation of artificial intelligence with Internet of Things (AIoT) devices has seen a
rapid surge in growth, largely due to the effective implementation of deep neural network …

On the accuracy of analog neural network inference accelerators

TP Xiao, B Feinberg, CH Bennett… - IEEE Circuits and …, 2022 - ieeexplore.ieee.org
Specialized accelerators have recently garnered attention as a method to reduce the power
consumption of neural network inference. A promising category of accelerators utilizes …

RACER: Bit-pipelined processing using resistive memory

MSQ Truong, E Chen, D Su, L Shen, A Glass… - MICRO-54: 54th Annual …, 2021 - dl.acm.org
To combat the high energy costs of moving data between main memory and the CPU, recent
works have proposed to perform processing-using-memory (PUM), a type of processing-in …

CiMLoop: A flexible, accurate, and fast compute-in-memory modeling tool

T Andrulis, JS Emer, V Sze - 2024 IEEE International …, 2024 - ieeexplore.ieee.org
Compute-In-Memory (CiM) is a promising solution to accelerate Deep Neural Networks
(DNNs) as it can avoid energy-intensive DNN weight movement and use memory arrays to …