A charge domain SRAM compute-in-memory macro with C-2C ladder-based 8-bit MAC unit in 22-nm FinFET process for edge inference
Compute-in-memory (CiM) is a promising solution to the memory bottleneck in
traditional computing architectures. However, the tradeoff between energy …
A 28-nm 64-kb 31.6-TFLOPS/W Digital-Domain Floating-Point-Computing-Unit and Double-Bit 6T-SRAM Computing-in-Memory Macro for Floating-Point CNNs
With the rapid advancement of artificial intelligence (AI), computing-in-memory (CIM)
structures have been proposed to improve energy efficiency (EF). However, previous CIMs often rely …
Trending IC design directions in 2022
Driven by the non-stop demand for a better and smarter society, the number of electronic devices
keeps increasing exponentially, and the computation power, communication data rate, smart …
From macro to microarchitecture: Reviews and trends of SRAM-based compute-in-memory circuits
The rapid growth of CMOS logic circuits has surpassed the advancements in memory
access, leading to significant “memory wall” bottlenecks, particularly in artificial intelligence …
A 22nm 832Kb hybrid-domain floating-point SRAM in-memory-compute macro with 16.2-70.2 TFLOPS/W for high-accuracy AI-edge devices
PC Wu, JW Su, LY Hong, JS Ren… - … Solid-State Circuits …, 2023 - ieeexplore.ieee.org
Advanced artificial-intelligence (AI) edge devices require high energy efficiency (EF) and high
inference accuracy [2], [4]–[6]. An SRAM-based compute-in-memory (CIM) based on MAC …
PIMCA: A programmable in-memory computing accelerator for energy-efficient DNN inference
This article presents a programmable in-memory computing accelerator (PIMCA) for low-
precision (1–2 b) deep neural network (DNN) inference. The custom 10T1C bitcell in the in …
ReDCIM: Reconfigurable digital computing-in-memory processor with unified FP/INT pipeline for cloud AI acceleration
Cloud AI acceleration has drawn great attention in recent years, as big models are
becoming a popular trend in deep learning. Cloud AI runs high-efficiency inference, high …
A 28nm 16.9-300TOPS/W computing-in-memory processor supporting floating-point NN inference/training with intensive-CIM sparse-digital architecture
Computing-in-memory (CIM) has shown high energy efficiency on low-precision integer
multiply-accumulate (MAC) [1–3]. However, implementing floating-point (FP) operations …
TranCIM: Full-digital bitline-transpose CIM-based sparse transformer accelerator with pipeline/parallel reconfigurable modes
Transformer models achieve excellent results in fields like natural language processing,
computer vision, and bioinformatics. Their large numbers of matrix multiplications (MMs) …
A floating-point 6T SRAM in-memory-compute macro using hybrid-domain structure for advanced AI edge chips
PC Wu, JW Su, LY Hong, JS Ren… - IEEE Journal of Solid-State Circuits, 2023 - ieeexplore.ieee.org
Advanced artificial intelligence edge devices are expected to support floating-point (FP)
multiply-and-accumulate operations while ensuring high energy efficiency and high …