A 28-nm 64-kb 31.6-TFLOPS/W Digital-Domain Floating-Point-Computing-Unit and Double-Bit 6T-SRAM Computing-in-Memory Macro for Floating-Point CNNs
With the rapid advancement of artificial intelligence (AI), computing-in-memory (CIM)
structure is proposed to improve energy efficiency (EF). However, previous CIMs often rely …
A 65-nm 8T SRAM compute-in-memory macro with column ADCs for processing neural networks
In this work, we present a novel 8T static random access memory (SRAM)-based compute-in-
memory (CIM) macro for processing neural networks with high energy efficiency. The …
An 8-b-precision 6T SRAM computing-in-memory macro using segmented-bitline charge-sharing scheme for AI edge chips
Advances in static random access memory (SRAM)-CIM devices are meant to increase
capacity while improving energy efficiency (EF) and reducing computing latency. This …
In-memory computing based on phase change memory for high energy efficiency
L He, X Li, C **e, Z Song - Science China Information Sciences, 2023 - Springer
The energy efficiency issue caused by the memory wall in traditional von Neumann
architecture is difficult to reconcile. In-memory computing (CIM) based on emerging …
A floating-point 6T SRAM in-memory-compute macro using hybrid-domain structure for advanced AI edge chips
PC Wu, JW Su, LY Hong, JS Ren… - IEEE Journal of Solid …, 2023 - ieeexplore.ieee.org
Advanced artificial intelligence edge devices are expected to support floating-point (FP)
multiply and accumulation operations while ensuring high energy efficiency and high …
An 8b-precision 6T SRAM computing-in-memory macro using time-domain incremental accumulation for AI edge chips
PC Wu, JW Su, YL Chung, LY Hong… - IEEE Journal of Solid …, 2023 - ieeexplore.ieee.org
This article presents a novel static random access memory computing-in-memory (SRAM-
CIM) structure designed for high-precision multiply-and-accumulate (MAC) operations with …
A fully bit-flexible computation in memory macro using multi-functional computing bit cell and embedded input sparsity sensing
CY Yao, TY Wu, HC Liang, YK Chen… - IEEE Journal of Solid …, 2023 - ieeexplore.ieee.org
Computation in memory (CIM) overcomes the von Neumann bottleneck by minimizing the
communication overhead between memory and processing elements. However, using …
An area- and energy-efficient spiking neural network with spike-time-dependent plasticity realized with SRAM processing-in-memory macro and on-chip unsupervised …
S Liu, JJ Wang, JT Zhou, SG Hu, Q Yu… - … Circuits and Systems, 2023 - ieeexplore.ieee.org
In this article, we present a spiking neural network (SNN) based on both SRAM processing-
in-memory (PIM) macro and on-chip unsupervised learning with Spike-Time-Dependent …
In Situ Storing 8T SRAM-CIM Macro for Full-Array Boolean Logic and Copy Operations
Computing in-memory (CIM) is a promising new computing method to solve problems
caused by von Neumann bottlenecks. It mitigates the need for transmitting large amounts of …
TT@CIM: A tensor-train in-memory-computing processor using bit-level-sparsity optimization and variable precision quantization
R Guo, Z Yue, X Si, H Li, T Hu, L Tang… - IEEE Journal of Solid …, 2022 - ieeexplore.ieee.org
Computing-in-memory (CIM) is an attractive approach for energy-efficient deep neural
network (DNN) processing, especially for low-power edge devices. However, today's typical …