Compute-in-memory chips for deep learning: Recent trends and prospects
Compute-in-memory (CIM) is a new computing paradigm that addresses the memory-wall
problem in hardware accelerator design for deep learning. The input vector and weight …
Spiking neural network integrated circuits: A review of trends and future directions
The rapid growth of deep learning, spurred by its successes in various fields ranging from
face recognition [1] to game playing [2], has also triggered a growing interest in the design of …
A compute-in-memory chip based on resistive random-access memory
Realizing increasingly complex artificial intelligence (AI) functionalities directly on edge
devices calls for unprecedented energy efficiency of edge hardware. Compute-in-memory …
Edge learning using a fully integrated neuro-inspired memristor chip
Learning is highly important for edge intelligence devices to adapt to different application
scenes and owners. Current technologies for training neural networks require moving …
A memristor-based analogue reservoir computing system for real-time and power-efficient signal processing
Reservoir computing offers a powerful neuromorphic computing architecture for
spatiotemporal signal processing. To boost the power efficiency of the hardware …
2022 roadmap on neuromorphic computing and engineering
Modern computation based on von Neumann architecture is now a mature cutting-edge
science. In the von Neumann architecture, processing and memory units are implemented …
A CMOS-integrated spintronic compute-in-memory macro for secure AI edge devices
YC Chiu, WS Khwa, CS Yang, SH Teng, HY Huang… - Nature …, 2023 - nature.com
Artificial intelligence edge devices should offer high inference accuracy and rapid response
times, as well as being energy efficient. Ensuring the security of these devices against …
A four-megabit compute-in-memory macro with eight-bit precision based on CMOS and resistive random-access memory for AI edge devices
JM Hung, CX Xue, HY Kao, YH Huang, FC Chang… - Nature …, 2021 - nature.com
Non-volatile computing-in-memory (nvCIM) architecture can reduce the latency and energy
consumption of artificial intelligence computation by minimizing the movement of data …
Memristor-based binarized spiking neural networks: Challenges and applications
Memristive arrays are a natural fit to implement spiking neural network (SNN) acceleration.
Representing information as digital spiking events can improve noise margins and tolerance …
29.1 A 40nm 64Kb 56.67 TOPS/W read-disturb-tolerant compute-in-memory/digital RRAM macro with active-feedback-based read and in-situ write verification
As memory-centric workloads (AI, graph-analytics) continue to gain momentum, technology
solutions that provide higher on-die memory capacity/bandwidth can provide scalability …