A charge domain SRAM compute-in-memory macro with C-2C ladder-based 8-bit MAC unit in 22-nm FinFET process for edge inference

H Wang, R Liu, R Dorrance… - IEEE Journal of Solid-State Circuits, 2023 - ieeexplore.ieee.org
Compute-in-memory (CiM) is a promising solution to the memory bottleneck in
traditional computing architectures. However, the tradeoff between energy …

A 28-nm 64-kb 31.6-TFLOPS/W Digital-Domain Floating-Point-Computing-Unit and Double-Bit 6T-SRAM Computing-in-Memory Macro for Floating-Point CNNs

A Guo, X Chen, F Dong, X Pu, D Li… - IEEE Journal of Solid-State Circuits, 2024 - ieeexplore.ieee.org
With the rapid advancement of artificial intelligence (AI), computing-in-memory (CIM)
structures have been proposed to improve energy efficiency (EF). However, previous CIMs often rely …

Trending IC design directions in 2022

CH Chan, L Cheng, W Deng, P Feng… - Journal of Semiconductors, 2022 - iopscience.iop.org
Driven by the non-stop demand for a better and smarter society, the number of electronic
devices keeps increasing exponentially, and the computation power, communication data rate, smart …

From macro to microarchitecture: Reviews and trends of SRAM-based compute-in-memory circuits

Z Zhang, J Chen, X Chen, A Guo, B Wang… - Science China Information Sciences, 2023 - Springer
The rapid growth of CMOS logic circuits has outpaced advances in memory
access, leading to significant “memory wall” bottlenecks, particularly in artificial intelligence …

A 22nm 832Kb hybrid-domain floating-point SRAM in-memory-compute macro with 16.2-70.2 TFLOPS/W for high-accuracy AI-edge devices

PC Wu, JW Su, LY Hong, JS Ren… - IEEE International Solid-State Circuits Conference (ISSCC), 2023 - ieeexplore.ieee.org
Advanced artificial-intelligence (AI) edge devices require high energy efficiency and high
inference accuracy [2], [4]-[6]. An SRAM-based compute-in-memory (CIM) based on MAC …

PIMCA: A programmable in-memory computing accelerator for energy-efficient DNN inference

B Zhang, S Yin, M Kim, J Saikia, S Kwon… - IEEE Journal of Solid-State Circuits, 2022 - ieeexplore.ieee.org
This article presents a programmable in-memory computing accelerator (PIMCA) for
low-precision (1–2 b) deep neural network (DNN) inference. The custom 10T1C bitcell in the in …

ReDCIM: Reconfigurable digital computing-in-memory processor with unified FP/INT pipeline for cloud AI acceleration

F Tu, Y Wang, Z Wu, L Liang, Y Ding… - IEEE Journal of Solid-State Circuits, 2022 - ieeexplore.ieee.org
Cloud AI acceleration has drawn great attention in recent years, as big models are
becoming a popular trend in deep learning. Cloud AI runs high-efficiency inference, high …

A 28nm 16.9-300TOPS/W computing-in-memory processor supporting floating-point NN inference/training with intensive-CIM sparse-digital architecture

J Yue, C He, Z Wang, Z Cong, Y He… - IEEE International Solid-State Circuits Conference (ISSCC), 2023 - ieeexplore.ieee.org
Computing-in-memory (CIM) has shown high energy efficiency on low-precision integer
multiply-accumulate (MAC) operations [1]-[3]. However, implementing floating-point (FP) operations …

TranCIM: Full-digital bitline-transpose CIM-based sparse transformer accelerator with pipeline/parallel reconfigurable modes

F Tu, Z Wu, Y Wang, L Liang, L Liu… - IEEE Journal of Solid-State Circuits, 2022 - ieeexplore.ieee.org
Transformer models achieve excellent results in fields like natural language processing,
computer vision, and bioinformatics. Their large numbers of matrix multiplications (MMs) …

A floating-point 6T SRAM in-memory-compute macro using hybrid-domain structure for advanced AI edge chips

PC Wu, JW Su, LY Hong, JS Ren… - IEEE Journal of Solid-State Circuits, 2023 - ieeexplore.ieee.org
Advanced artificial intelligence edge devices are expected to support floating-point (FP)
multiply-and-accumulate operations while ensuring high energy efficiency and high …