Survey of deep learning accelerators for edge and emerging computing
The unprecedented progress in artificial intelligence (AI), particularly in deep learning
algorithms coupled with ubiquitous internet-connected smart devices, has created a high demand for …
A 5-nm 254-TOPS/W 221-TOPS/mm2 Fully-Digital Computing-in-Memory Macro Supporting Wide-Range Dynamic-Voltage-Frequency Scaling and Simultaneous …
H Fujiwara, H Mori, WC Zhao… - … Solid-State Circuits …, 2022 - ieeexplore.ieee.org
Computing-in-memory (CIM) is being widely explored to minimize power consumption in
data movement and multiply-and-accumulate (MAC) for edge-AI devices. Although most …
A 4nm 6163-TOPS/W/b SRAM Based Digital-Computing-in-Memory Macro Supporting Bit-Width Flexibility and Simultaneous MAC and Weight …
H Mori, WC Zhao, CE Lee, CF Lee… - … Solid-State Circuits …, 2023 - ieeexplore.ieee.org
The computational load for accurate AI workloads is moving from large server clusters to
edge devices, thus enabling richer and more personalized AI applications. Compute-in …
An overview of computing-in-memory circuits with DRAM and NVM
S Kim, HJ Yoo - IEEE Transactions on Circuits and Systems II …, 2023 - ieeexplore.ieee.org
Computing-in-memory (CIM) has emerged as an energy-efficient hardware solution for
machine learning and AI. While static random access memory (SRAM)-based CIM has been …
Real-time decoding for fault-tolerant quantum computing: Progress, challenges and outlook
Quantum computing is poised to solve practically useful problems which are computationally
intractable for classical supercomputers. However, the current generation of quantum …
PIMCA: A programmable in-memory computing accelerator for energy-efficient DNN inference
This article presents a programmable in-memory computing accelerator (PIMCA) for low-
precision (1–2 b) deep neural network (DNN) inference. The custom 10T1C bitcell in the in …
Digital versus analog artificial intelligence accelerators: Advances, trends, and emerging designs
For state-of-the-art artificial intelligence (AI) accelerators, there have been large advances in
both all-digital and analog/mixed-signal circuit-based designs. This article presents a …
A 95.6-TOPS/W deep learning inference accelerator with per-vector scaled 4-bit quantization in 5 nm
The energy efficiency of deep neural network (DNN) inference can be improved with custom
accelerators. DNN inference accelerators often employ specialized hardware techniques to …
16.5 DynaPlasia: An eDRAM in-memory-computing-based reconfigurable spatial accelerator with triple-mode cell for dynamic resource switching
In-memory computing (IMC) processors show significant energy and area efficiency for deep
neural network (DNN) processing [1–3]. As shown in Fig. 16.5.1, despite promising macro …
A 28 nm 16 kb bit-scalable charge-domain transpose 6T SRAM in-memory computing macro
This article presents a compact, robust, and transposable SRAM in-memory computing
(IMC) macro to support feedforward (FF) and backpropagation (BP) computation within a …