A full spectrum of computing-in-memory technologies
Computing in memory (CIM) could be used to overcome the von Neumann bottleneck and to
provide sustainable improvements in computing throughput and energy efficiency …
A 28-nm 64-kb 31.6-TFLOPS/W Digital-Domain Floating-Point-Computing-Unit and Double-Bit 6T-SRAM Computing-in-Memory Macro for Floating-Point CNNs
With the rapid advancement of artificial intelligence (AI), computing-in-memory (CIM)
structures have been proposed to improve energy efficiency (EF). However, previous CIMs often rely …
H3D-Transformer: A Heterogeneous 3D (H3D) Computing Platform for Transformer Model Acceleration on Edge Devices
Prior hardware accelerator designs primarily focused on single-chip solutions for 10 MB-
class computer vision models. The GB-class transformer models for natural language …
Designing circuits for AiMC based on non-volatile memories: A tutorial brief on trade-off and strategies for ADCs and DACs co-design
R Vignali, R Zurla, M Pasotti, PL Rolandi… - … on Circuits and …, 2023 - ieeexplore.ieee.org
Analog In-Memory Computing (AiMC) based on Non-Volatile Memories (NVM) is a
promising candidate to reduce latency and power consumption of neural network (NN) …
A floating-point 6T SRAM in-memory-compute macro using hybrid-domain structure for advanced AI edge chips
PC Wu, JW Su, LY Hong, JS Ren… - IEEE Journal of Solid …, 2023 - ieeexplore.ieee.org
Advanced artificial intelligence edge devices are expected to support floating-point (FP)
multiply and accumulation operations while ensuring high energy efficiency and high …
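The hybrid-domain floating-point MAC that papers like the one above target can be illustrated with a minimal sketch: the exponent handling stays digital while the mantissa products reduce to a fixed-point accumulation, which is the part a CIM array can perform. All names below are illustrative, not the paper's circuit.

```python
# Illustrative sketch (assumption: not the cited design): an FP MAC split into
# exponent alignment plus integer mantissa accumulation, the decomposition
# that hybrid-domain FP-CIM macros exploit.
import math

def fp_mac(weights, inputs, mant_bits=8):
    """Accumulate sum(w * x) by aligning products to a shared exponent."""
    prods = []
    for w, x in zip(weights, inputs):
        m, e = math.frexp(w * x)          # w*x = m * 2**e, with 0.5 <= |m| < 1
        prods.append((int(m * (1 << mant_bits)), e - mant_bits))
    # Align every product to the largest exponent, then integer-accumulate
    # (the fixed-point step a CIM array can carry out).
    e_max = max(e for _, e in prods)
    acc = sum(m >> (e_max - e) for m, e in prods)
    return acc * 2.0 ** e_max

print(fp_mac([0.5, -1.25, 2.0], [1.0, 0.5, 0.25]))  # → 0.375
```

Truncating mantissas during alignment is what costs accuracy here, which is why such designs budget mantissa width against inference accuracy.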
An 8b-precision 6T SRAM computing-in-memory macro using time-domain incremental accumulation for AI edge chips
PC Wu, JW Su, YL Chung, LY Hong… - IEEE Journal of Solid …, 2023 - ieeexplore.ieee.org
This article presents a novel static random access memory computing-in-memory (SRAM-
CIM) structure designed for high-precision multiply-and-accumulate (MAC) operations with …
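The incremental accumulation idea in SRAM-CIM abstracts like the one above can be sketched as a bit-serial MAC: the input vector is fed one bit per cycle and partial sums are combined by shift-and-add. This mirrors the common dataflow only, not the cited time-domain circuit.

```python
# Illustrative sketch (assumption: generic bit-serial SRAM-CIM dataflow,
# not the cited macro): inputs are streamed one bit per cycle and partial
# sums are weighted by powers of two.

def bit_serial_mac(weights, inputs, in_bits=8):
    """sum(w * x) for unsigned in_bits-wide inputs, one input bit per cycle."""
    acc = 0
    for b in range(in_bits):                       # one "cycle" per input bit
        # Column-wise partial sum over the inputs whose bit b is set.
        partial = sum(w for w, x in zip(weights, inputs) if (x >> b) & 1)
        acc += partial << b                        # shift-and-add accumulation
    return acc

print(bit_serial_mac([3, -2, 5], [10, 7, 1]))  # 3*10 - 2*7 + 5*1 → 21
```

The appeal of this dataflow in hardware is that each cycle only needs a low-precision partial sum, so accumulator precision can grow incrementally instead of all at once.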
34.3 A 22nm 64kb Lightning-Like Hybrid Computing-in-Memory Macro with a Compressed Adder Tree and Analog-Storage Quantizers for Transformer and CNNs
A Guo, X Chen, F Dong, J Chen, Z Yuan… - … Solid-State Circuits …, 2024 - ieeexplore.ieee.org
SRAM-based computing-in-memory (CIM) has made significant progress in improving the
energy efficiency (EF) of neural operators, specifically MAC, used in AI applications. Prior …
A nonvolatile AI-edge processor with SLC–MLC hybrid ReRAM compute-in-memory macro using current–voltage-hybrid readout scheme
On-chip non-volatile compute-in-memory (nvCIM) enables artificial intelligence (AI)-edge
processors to perform multiply-and-accumulate (MAC) operations while enabling the non …
34.2 A 16nm 96Kb Integer/Floating-Point Dual-Mode-Gain-Cell-Computing-in-Memory Macro Achieving 73.3-163.3 TOPS/W and 33.2-91.2 TFLOPS/W for AI-Edge …
WS Khwa, PC Wu, JJ Wu, JW Su… - … Solid-State Circuits …, 2024 - ieeexplore.ieee.org
Advanced AI-edge chips require computational flexibility and high-energy efficiency (EEF)
with sufficient inference accuracy for a variety of applications. Floating-point (FP) numerical …
PICO-RAM: A PVT-Insensitive Analog Compute-In-Memory SRAM Macro With In Situ Multi-Bit Charge Computing and 6T Thin-Cell-Compatible Layout
Analog compute-in-memory (CIM) in static random access memory (SRAM) is promising for
accelerating deep learning inference by circumventing the memory wall and exploiting ultra …
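A toy model can show the constraint analog charge-domain CIM entries like this one work against: the dot product is formed as summed charge, then an ADC digitizes it, so ADC resolution bounds the usable MAC precision. The parameters below are assumptions for illustration only.

```python
# Toy model (assumptions throughout, not the PICO-RAM design): an ideal
# charge-sum dot product followed by a uniform ADC with clipping.

def analog_mac(weights, inputs, adc_bits=5, full_scale=64.0):
    """Ideal charge-domain dot product digitized by an adc_bits ADC."""
    charge = sum(w * x for w, x in zip(weights, inputs))   # analog summation
    levels = 1 << adc_bits
    lsb = full_scale / levels                              # ADC step size
    code = max(0, min(levels - 1, round(charge / lsb)))    # quantize and clip
    return code * lsb                                      # digitized output

exact = sum(w * x for w, x in zip([1, 2, 3], [4, 5, 6]))   # 32
print(exact, analog_mac([1, 2, 3], [4, 5, 6]))             # 32 vs 32.0
```

Shrinking `adc_bits` in this toy immediately adds quantization error to the MAC result, which is the trade-off such macros manage with techniques like multi-bit charge computing.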