Compute-in-memory chips for deep learning: Recent trends and prospects
Compute-in-memory (CIM) is a new computing paradigm that addresses the memory-wall
problem in hardware accelerator design for deep learning. The input vector and weight …
Challenges and trends of SRAM-based computing-in-memory for AI edge devices
CJ Jhang, CX Xue, JM Hung… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
When applied to artificial intelligence edge devices, the conventional von Neumann
computing architecture imposes numerous challenges (e.g., improving the energy efficiency) …
16.4 An 89TOPS/W and 16.3TOPS/mm2 All-Digital SRAM-Based Full-Precision Compute-In-Memory Macro in 22nm for Machine-Learning Edge Applications
From the cloud to edge devices, artificial intelligence (AI) and machine learning (ML) are
widely used in many cognitive tasks, such as image classification and speech recognition. In …
XNOR-SRAM: In-memory computing SRAM macro for binary/ternary deep neural networks
We present XNOR-SRAM, a mixed-signal in-memory computing (IMC) SRAM macro that
computes ternary-XNOR-and-accumulate (XAC) operations in binary/ternary deep neural …
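The ternary XNOR-and-accumulate (XAC) operation described in this abstract can be sketched as a functional model in plain Python. This is an illustrative sketch of the arithmetic only, not of the mixed-signal circuit; the function names are hypothetical:

```python
def xac(inputs, weights):
    """Ternary XNOR-and-accumulate: elementwise product of values
    in {-1, 0, +1}, summed into a single accumulation result."""
    assert all(v in (-1, 0, 1) for v in list(inputs) + list(weights))
    return sum(i * w for i, w in zip(inputs, weights))

def xnor_popcount(a_bits, b_bits):
    """Binary special case: with +/-1 values stored as {0, 1} bits,
    XNOR marks matching bit pairs, and 2*matches - n recovers the
    same value as the +/-1 dot product."""
    n = len(a_bits)
    matches = sum(1 for a, b in zip(a_bits, b_bits) if a == b)
    return 2 * matches - n
```

For strictly binary operands the two views agree: for example, `a_bits = [1, 0, 1]` mapped to `[+1, -1, +1]` against `b_bits = [1, 1, 0]` mapped to `[+1, +1, -1]` gives the same result either way.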
C3SRAM: An in-memory-computing SRAM macro based on robust capacitive coupling computing mechanism
This article presents C3SRAM, an in-memory-computing SRAM macro. The macro is an
SRAM module with the circuits embedded in bitcells and peripherals to perform hardware …
15.3 A 351TOPS/W and 372.4 GOPS compute-in-memory SRAM macro in 7nm FinFET CMOS for machine-learning applications
Compute-in-memory (CIM) parallelizes multiply-and-average (MAV) computations and
reduces off-chip weight access to reduce energy consumption and latency, specifically for AI …
15.4 A 22nm 2Mb ReRAM compute-in-memory macro with 121-28TOPS/W for multibit MAC computing for tiny AI edge devices
CX Xue, TY Huang, JS Liu, TW Chang… - … Solid-State Circuits …, 2020 - ieeexplore.ieee.org
Nonvolatile computing-in-memory (nvCIM) can improve the latency (t_AC) and energy
efficiency (EF_MAC) of tiny AI edge devices performing multiply-and-accumulate (MAC) …
Colonnade: A reconfigurable SRAM-based digital bit-serial compute-in-memory macro for processing neural networks
This article (Colonnade) presents a fully digital bit-serial compute-in-memory (CIM) macro.
The digital CIM macro is designed for processing neural networks with reconfigurable 1-16 …
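The bit-serial CIM scheme this abstract describes can be modeled functionally: a multibit input is streamed one bit per cycle, and each cycle's 1-bit dot product is accumulated with the appropriate power-of-two weight. This is a sketch under the assumption of unsigned inputs; the function name and parameters are illustrative, not from the paper:

```python
def bit_serial_mac(inputs, weights, in_bits=4):
    """Bit-serial multiply-and-accumulate: stream each unsigned input
    one bit per cycle (LSB first); each cycle computes a 1-bit-by-weight
    dot product and adds it to the accumulator shifted by the cycle index."""
    acc = 0
    for b in range(in_bits):
        # 1-bit slice of every input element in this cycle
        partial = sum(((x >> b) & 1) * w for x, w in zip(inputs, weights))
        acc += partial << b
    return acc
```

For inputs that fit in `in_bits`, the result matches the ordinary dot product, e.g. `bit_serial_mac([3, 5], [2, 4])` equals `3*2 + 5*4`.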
16.3 A 28nm 384kb 6T-SRAM computation-in-memory macro with 8b precision for AI edge chips
Recent SRAM-based computation-in-memory (CIM) macros enable mid-to-high precision
multiply-and-accumulate (MAC) operations with improved energy efficiency using ultra …
15.5 A 28nm 64Kb 6T SRAM computing-in-memory macro with 8b MAC operation for AI edge chips
Advanced AI edge chips require multibit input (IN), weight (W), and output (OUT) for CNN
multiply-and-accumulate (MAC) operations to achieve an inference accuracy that is …