Compute-in-memory chips for deep learning: Recent trends and prospects
Compute-in-memory (CIM) is a new computing paradigm that addresses the memory-wall
problem in hardware accelerator design for deep learning. The input vector and weight …
Challenges and trends of SRAM-based computing-in-memory for AI edge devices
CJ Jhang, CX Xue, JM Hung… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
When applied to artificial intelligence edge devices, the conventional von Neumann
computing architecture imposes numerous challenges (e.g., improving the energy efficiency) …
A compute-in-memory chip based on resistive random-access memory
Realizing increasingly complex artificial intelligence (AI) functionalities directly on edge
devices calls for unprecedented energy efficiency of edge hardware. Compute-in-memory …
A charge domain SRAM compute-in-memory macro with C-2C ladder-based 8-bit MAC unit in 22-nm FinFET process for edge inference
Compute-in-memory (CiM) is one promising solution to address the memory bottleneck
existing in traditional computing architectures. However, the tradeoff between energy …
Colonnade: A reconfigurable SRAM-based digital bit-serial compute-in-memory macro for processing neural networks
This article (Colonnade) presents a fully digital bit-serial compute-in-memory (CIM) macro.
The digital CIM macro is designed for processing neural networks with reconfigurable 1-16 …
16.3 A 28nm 384kb 6T-SRAM computation-in-memory macro with 8b precision for AI edge chips
Recent SRAM-based computation-in-memory (CIM) macros enable mid-to-high precision
multiply-and-accumulate (MAC) operations with improved energy efficiency using ultra …
15.2 A 2.75-to-75.9 TOPS/W computing-in-memory NN processor supporting set-associate block-wise zero skipping and ping-pong CIM with simultaneous …
Computing-in-memory (CIM) is an attractive approach for energy-efficient neural network
(NN) processors, especially for low-power edge devices. Previous CIM chips have …
A CMOS-integrated compute-in-memory macro based on resistive random-access memory for AI edge devices
CX Xue, YC Chiu, TW Liu, TY Huang, JS Liu… - Nature …, 2021 - nature.com
The development of small, energy-efficient artificial intelligence edge devices is limited in
conventional computing architectures by the need to transfer data between the processor …
CAP-RAM: A charge-domain in-memory computing 6T-SRAM for accurate and precision-programmable CNN inference
A compact, accurate, and bitwidth-programmable in-memory computing (IMC) static random-
access memory (SRAM) macro, named CAP-RAM, is presented for energy-efficient …
A 65-nm 8T SRAM compute-in-memory macro with column ADCs for processing neural networks
In this work, we present a novel 8T static random access memory (SRAM)-based compute-in-
memory (CIM) macro for processing neural networks with high energy efficiency. The …