Compute-in-memory chips for deep learning: Recent trends and prospects
Compute-in-memory (CIM) is a new computing paradigm that addresses the memory-wall
problem in hardware accelerator design for deep learning. The input vector and weight …
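As a rough, illustrative aside (not drawn from the cited paper), the short Python sketch below shows what mapping "the input vector and weight" onto a CIM array amounts to: weights are stored as cell conductances, the input vector is applied as row voltages, and each bit-line current is a dot product collected in a single analog step before an ADC digitizes it. The function name cim_matvec and the parameters g_max and adc_bits are hypothetical.

```python
# Illustrative sketch (not from any cited paper): how a CIM crossbar evaluates
# y = W x in the analog domain. Weights are assumed to be programmed as cell
# conductances, inputs applied as word-line voltages, and column currents summed
# by Kirchhoff's current law.
import numpy as np

def cim_matvec(weights, x, adc_bits=8, g_max=1e-4):
    """Hypothetical model of one analog matrix-vector multiply in a crossbar."""
    # Map signed weights onto a differential pair of conductances in [0, g_max].
    w_norm = weights / np.max(np.abs(weights))
    g_pos = np.clip(w_norm, 0, None) * g_max
    g_neg = np.clip(-w_norm, 0, None) * g_max

    # Inputs drive the rows as voltages; each column current is a dot product.
    v_in = x  # volts, for illustration
    i_col = v_in @ (g_pos - g_neg)   # Kirchhoff summation along the bit lines

    # An ADC digitizes the column currents; rounding models its finite precision.
    i_max = max(float(np.max(np.abs(i_col))), 1e-12)
    levels = 2 ** (adc_bits - 1) - 1
    return np.round(i_col / i_max * levels) / levels * i_max

# Example: a 4x3 weight matrix multiplied by a 4-element input vector "in place".
W = np.random.randn(4, 3)
x = np.random.rand(4)
print(cim_matvec(W, x))
```

The differential conductance pair (g_pos, g_neg) is one common way to represent signed weights with devices whose conductance is strictly positive; the ADC quantization stands in for the read-out precision limits that much of the CIM literature works around.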
Spiking neural network integrated circuits: A review of trends and future directions
The rapid growth of deep learning, spurred by its successes in various fields ranging from
face recognition [1] to game playing [2], has also triggered a growing interest in the design of …
Edge learning using a fully integrated neuro-inspired memristor chip
Learning is highly important for edge intelligence devices to adapt to different application
scenarios and owners. Current technologies for training neural networks require moving …
A compute-in-memory chip based on resistive random-access memory
Realizing increasingly complex artificial intelligence (AI) functionalities directly on edge
devices calls for unprecedented energy efficiency of edge hardware. Compute-in-memory …
A memristor-based analogue reservoir computing system for real-time and power-efficient signal processing
Reservoir computing offers a powerful neuromorphic computing architecture for
spatiotemporal signal processing. To boost the power efficiency of the hardware …
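For orientation only (this is not a model of the memristor hardware in the paper), a minimal echo state network in Python illustrates the reservoir-computing architecture the abstract refers to: a fixed random recurrent reservoir projects the input into a high-dimensional state, and only a linear readout is trained. The names EchoStateReservoir and train_readout and all parameter values are hypothetical.

```python
# Minimal software sketch of reservoir computing (an echo state network).
import numpy as np

class EchoStateReservoir:
    def __init__(self, n_in, n_res=100, spectral_radius=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        w = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # Scale the recurrent weights so the largest eigenvalue magnitude is fixed,
        # which keeps the reservoir dynamics stable (the "echo state" property).
        w *= spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))
        self.w = w
        self.n_res = n_res

    def run(self, inputs):
        # Collect the reservoir state for every time step of the input sequence.
        x = np.zeros(self.n_res)
        states = []
        for u in inputs:
            x = np.tanh(self.w_in @ u + self.w @ x)
            states.append(x)
        return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    # Only the linear readout is trained (ridge regression); the reservoir is fixed.
    return np.linalg.solve(states.T @ states + ridge * np.eye(states.shape[1]),
                           states.T @ targets)

# Example: learn to reproduce a delayed copy of a random input signal.
u = np.random.rand(500, 1)
res = EchoStateReservoir(n_in=1)
s = res.run(u)
w_out = train_readout(s[10:], u[:-10])   # target = input delayed by 10 steps
print(np.mean((s[10:] @ w_out - u[:-10]) ** 2))
```

Because only the readout weights are trained, the expensive recurrent dynamics can be delegated to a physical substrate, which is what makes the approach attractive for power-efficient analog hardware.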
2022 roadmap on neuromorphic computing and engineering
Modern computation based on the von Neumann architecture is now a mature, cutting-edge
science. In the von Neumann architecture, processing and memory units are implemented …
A CMOS-integrated spintronic compute-in-memory macro for secure AI edge devices
Artificial intelligence edge devices should offer high inference accuracy and rapid response
times while being energy efficient. Ensuring the security of these devices against …
Fusion of memristor and digital compute-in-memory processing for energy-efficient edge computing
Artificial intelligence (AI) edge devices favour high-capacity nonvolatile compute-
in-memory (CIM) to achieve high energy efficiency and rapid wakeup-to-response with …
DYNAP-SE2: a scalable multi-core dynamic neuromorphic asynchronous spiking neural network processor
With the remarkable progress that technology has made, the need for processing data near
the sensors at the edge has increased dramatically. The electronic systems used in these …
Multi-level, forming and filament free, bulk switching trilayer RRAM for neuromorphic computing at the edge
CMOS-RRAM integration holds great promise for low-energy, high-throughput
neuromorphic computing. However, most RRAM technologies relying on filamentary …