Compute-in-memory chips for deep learning: Recent trends and prospects

S Yu, H Jiang, S Huang, X Peng… - IEEE Circuits and Systems …, 2021 - ieeexplore.ieee.org
Compute-in-memory (CIM) is a new computing paradigm that addresses the memory-wall
problem in hardware accelerator design for deep learning. The input vector and weight …
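
The core kernel all of the macros listed below accelerate is a multiply-and-accumulate (MAC) between an input activation vector and weight columns held inside the memory array, so only activations move on- and off-chip. A minimal NumPy sketch of that reference computation; the sizes and dtypes are illustrative assumptions, not taken from the survey:

```python
import numpy as np

# Illustrative sizes and dtypes (assumptions, not from the survey):
# 256-element input vector, 64 weight columns stored in the array.
rng = np.random.default_rng(0)
weights = rng.integers(-128, 128, size=(256, 64), dtype=np.int8)  # stationary in memory
inputs = rng.integers(-128, 128, size=256, dtype=np.int8)         # streamed activations

# One MAC per column: the array accumulates input[i] * weight[i, j] in place,
# so weights never travel to a separate compute unit.
mac_out = inputs.astype(np.int32) @ weights.astype(np.int32)
print(mac_out.shape)  # (64,): one partial sum per column
```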

Challenges and trends of SRAM-based computing-in-memory for AI edge devices

CJ Jhang, CX Xue, JM Hung… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
When applied to artificial intelligence edge devices, the conventional von Neumann
computing architecture imposes numerous challenges (e.g., improving the energy efficiency) …

16.4 An 89TOPS/W and 16.3TOPS/mm² All-Digital SRAM-Based Full-Precision Compute-In Memory Macro in 22nm for Machine-Learning Edge Applications

YD Chih, PH Lee, H Fujiwara, YC Shih… - … Solid-State Circuits …, 2021 - ieeexplore.ieee.org
From the cloud to edge devices, artificial intelligence (AI) and machine learning (ML) are
widely used in many cognitive tasks, such as image classification and speech recognition. In …

XNOR-SRAM: In-memory computing SRAM macro for binary/ternary deep neural networks

S Yin, Z Jiang, JS Seo, M Seok - IEEE Journal of Solid-State …, 2020 - ieeexplore.ieee.org
We present XNOR-SRAM, a mixed-signal in-memory computing (IMC) SRAM macro that
computes ternary-XNOR-and-accumulate (XAC) operations in binary/ternary deep neural …
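
For binary operands encoded as ±1, the multiply in a MAC reduces to an XNOR of the {0,1} encodings and the accumulation to a popcount, which is what an XAC operation realizes on the bitline. A small sketch of that equivalence; vector length and random seed are arbitrary, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
a_pm1 = rng.choice([-1, 1], size=256)   # binary activations as +/-1 (illustrative size)
w_pm1 = rng.choice([-1, 1], size=256)   # binary weights as +/-1

ref = int(np.dot(a_pm1, w_pm1))         # reference dot product in the +/-1 domain

# Hardware view: encode +1 -> 1 and -1 -> 0, replace the multiply with XNOR
# (1 where the signs agree) and the accumulation with a popcount.
a_bits = (a_pm1 > 0).astype(np.uint8)
w_bits = (w_pm1 > 0).astype(np.uint8)
agree = 1 - (a_bits ^ w_bits)
xac = 2 * int(agree.sum()) - a_pm1.size  # map the popcount back to the +/-1 sum

assert xac == ref
print(xac)
```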

C3SRAM: An in-memory-computing SRAM macro based on robust capacitive coupling computing mechanism

Z Jiang, S Yin, JS Seo, M Seok - IEEE Journal of Solid-State …, 2020 - ieeexplore.ieee.org
This article presents C3SRAM, an in-memory-computing SRAM macro. The macro is an
SRAM module with the circuits embedded in bitcells and peripherals to perform hardware …
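
A rough behavioral model of charge-domain (capacitive-coupling) accumulation in general, not the exact C3SRAM circuit: each bitcell drives a unit capacitor high or low according to its 1-b product, charge sharing across the identical capacitors leaves a bitline voltage proportional to the count of ones, and an ADC digitizes it. The supply, array height, and ADC resolution below are assumed values:

```python
import numpy as np

VDD = 0.8       # assumed supply voltage
N = 256         # assumed number of bitcells sharing one compute bitline
ADC_BITS = 5    # assumed ADC resolution

rng = np.random.default_rng(2)
products = rng.integers(0, 2, size=N)   # per-bitcell 1-b products (0 or 1)

# Each bitcell drives its unit capacitor to VDD or 0 V; charge sharing across
# N identical capacitors leaves the average voltage on the compute bitline.
v_bitline = VDD * products.mean()

# An ADC digitizes the analog partial sum.
code = int(round(v_bitline / VDD * (2**ADC_BITS - 1)))
print(f"bitline = {v_bitline:.3f} V -> ADC code {code}")
```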

15.3 A 351TOPS/W and 372.4 GOPS compute-in-memory SRAM macro in 7nm FinFET CMOS for machine-learning applications

Q Dong, ME Sinangil, B Erbagci, D Sun… - … Solid-State Circuits …, 2020 - ieeexplore.ieee.org
Compute-in-memory (CIM) parallelizes multiply-and-average (MAV) computations and
reduces off-chip weight access to reduce energy consumption and latency, specifically for AI …

15.4 A 22nm 2Mb ReRAM compute-in-memory macro with 121-28TOPS/W for multibit MAC computing for tiny AI edge devices

CX Xue, TY Huang, JS Liu, TW Chang… - … Solid-State Circuits …, 2020 - ieeexplore.ieee.org
Nonvolatile computing-in-memory (nvCIM) can improve the latency (t_AC) and energy
efficiency (EF_MAC) of tiny AI edge devices performing multiply-and-accumulate (MAC) …
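
In a ReRAM crossbar, weights are stored as cell conductances and the analog MAC follows Ohm's and Kirchhoff's laws: each column current is the sum of row voltage times cell conductance, which an ADC then converts to a digital partial sum. A behavioral sketch with assumed conductance and voltage values, not taken from the paper:

```python
import numpy as np

# Assumed device parameters (not from the paper): 1 bit per cell.
G_ON, G_OFF = 100e-6, 1e-6   # on/off conductances in siemens
V_READ = 0.2                 # read voltage on activated rows, in volts

rng = np.random.default_rng(3)
w_bits = rng.integers(0, 2, size=(128, 16))   # weight bits stored in the crossbar
in_bits = rng.integers(0, 2, size=128)        # 1-b input vector for this cycle

G = np.where(w_bits == 1, G_ON, G_OFF)        # conductance of each cell
V = in_bits * V_READ                          # voltage applied to each row

# Kirchhoff's current law: every column current is an analog MAC result.
I_col = V @ G                                 # amperes, one value per column
print(I_col[:4])
```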

Colonnade: A reconfigurable SRAM-based digital bit-serial compute-in-memory macro for processing neural networks

H Kim, T Yoo, TTH Kim, B Kim - IEEE Journal of Solid-State …, 2021 - ieeexplore.ieee.org
This article (Colonnade) presents a fully digital bit-serial compute-in-memory (CIM) macro.
The digital CIM macro is designed for processing neural networks with reconfigurable 1-16 …
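
Bit-serial digital CIM decomposes multibit operands into bit-planes, performs 1-b × 1-b operations in the array each cycle, and combines the partial sums with shift-and-add, which is what makes the precision reconfigurable. The sketch below shows the arithmetic only; it is an illustrative dataflow, not Colonnade's exact mapping:

```python
import numpy as np

def bit_serial_mac(inputs, weights, in_bits=8, w_bits=8):
    """Compute dot(inputs, weights) for unsigned operands by slicing both into
    bit-planes: each cycle performs a 1-b x 1-b AND-and-popcount, and the
    partial sums are combined with shift-and-add (illustrative dataflow)."""
    acc = 0
    for i in range(in_bits):
        a_plane = (inputs >> i) & 1
        for j in range(w_bits):
            w_plane = (weights >> j) & 1
            acc += int(np.dot(a_plane, w_plane)) << (i + j)
    return acc

rng = np.random.default_rng(4)
a = rng.integers(0, 256, size=64)   # 8-b activations (illustrative)
w = rng.integers(0, 256, size=64)   # 8-b weights (illustrative)
assert bit_serial_mac(a, w) == int(np.dot(a, w))
print(bit_serial_mac(a, w))
```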

16.3 A 28nm 384kb 6T-SRAM computation-in-memory macro with 8b precision for AI edge chips

JW Su, YC Chou, R Liu, TW Liu, PJ Lu… - … Solid-State Circuits …, 2021 - ieeexplore.ieee.org
Recent SRAM-based computation-in-memory (CIM) macros enable mid-to-high precision
multiply-and-accumulate (MAC) operations with improved energy efficiency using ultra …

15.5 A 28nm 64Kb 6T SRAM computing-in-memory macro with 8b MAC operation for AI edge chips

X Si, YN Tu, WH Huang, JW Su, PJ Lu… - … solid-state circuits …, 2020 - ieeexplore.ieee.org
Advanced AI edge chips require multibit input (IN), weight (W), and output (OUT) for CNN
multiply-and-accumulate (MAC) operations to achieve an inference accuracy that is …