Compute-in-memory chips for deep learning: Recent trends and prospects
Compute-in-memory (CIM) is a new computing paradigm that addresses the memory-wall
problem in hardware accelerator design for deep learning. The input vector and weight …
A survey of SRAM-based in-memory computing techniques and applications
As von Neumann computing architectures become increasingly constrained by data-
movement overheads, researchers have started exploring in-memory computing (IMC) …
A charge domain SRAM compute-in-memory macro with C-2C ladder-based 8-bit MAC unit in 22-nm FinFET process for edge inference
Compute-in-memory (CiM) is one promising solution to address the memory bottleneck
existing in traditional computing architectures. However, the tradeoff between energy …
HERMES-Core—A 1.59-TOPS/mm² PCM on 14-nm CMOS In-Memory Compute Core Using 300-ps/LSB Linearized CCO-Based ADCs
We present a 256 × 256 in-memory compute (IMC) core designed and fabricated in 14-nm
CMOS technology with backend-integrated multi-level phase change memory (PCM). It …
15.2 A 2.75-to-75.9 TOPS/W computing-in-memory NN processor supporting set-associate block-wise zero skipping and ping-pong CIM with simultaneous …
Computing-in-memory (CIM) is an attractive approach for energy-efficient neural network
(NN) processors, especially for low-power edge devices. Previous CIM chips have …
Scalable and programmable neural network inference accelerator based on in-memory computing
This work demonstrates a programmable in-memory-computing (IMC) inference accelerator
for scalable execution of neural network (NN) models, leveraging a high-signal-to-noise …
CAP-RAM: A charge-domain in-memory computing 6T-SRAM for accurate and precision-programmable CNN inference
A compact, accurate, and bitwidth-programmable in-memory computing (IMC) static random-
access memory (SRAM) macro, named CAP-RAM, is presented for energy-efficient …
A 65-nm 8T SRAM compute-in-memory macro with column ADCs for processing neural networks
In this work, we present a novel 8T static random access memory (SRAM)-based compute-in-
memory (CIM) macro for processing neural networks with high energy efficiency. The …
A local computing cell and 6T SRAM-based computing-in-memory macro with 8-b MAC operation for edge AI chips
This article presents a computing-in-memory (CIM) structure aimed at improving the energy
efficiency of edge devices running multi-bit multiply-and-accumulate (MAC) operations. The …
Mixed-signal computing for deep neural network inference
Modern deep neural networks (DNNs) require billions of multiply-accumulate operations per
inference. Given that these computations demand relatively low precision, it is feasible to …
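The low-precision multiply-accumulate (MAC) arithmetic these abstracts center on can be sketched in software. The snippet below is a minimal illustration, not taken from any of the listed papers: it assumes a simple symmetric int8 quantization scheme with hypothetical scale factors (`sx`, `sw`), computes the dot product in wide integer arithmetic as a CIM macro's accumulator would, and rescales the result back to floating point.

```python
import numpy as np

def quantize(x, scale):
    """Symmetric int8 quantization: round to the nearest step, clip to int8 range."""
    q = np.round(x / scale).astype(np.int64)
    return np.clip(q, -128, 127).astype(np.int8)

# Hypothetical 256-element activation and weight vectors (illustrative data only).
rng = np.random.default_rng(0)
x = rng.standard_normal(256).astype(np.float32)
w = rng.standard_normal(256).astype(np.float32)

sx, sw = 0.05, 0.03  # assumed quantization scales, chosen to avoid clipping here
qx, qw = quantize(x, sx), quantize(w, sw)

# The MAC reduction: int8 products accumulated in a wide (int32) accumulator,
# then rescaled to recover an approximation of the floating-point dot product.
acc = np.dot(qx.astype(np.int32), qw.astype(np.int32))
approx = float(acc) * sx * sw
exact = float(np.dot(x, w))
```

Because each operand carries at most half a quantization step of error, the rescaled integer result tracks the floating-point dot product closely, which is the property that lets CIM designs trade analog or low-bit-width compute for energy efficiency.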