Compute-in-memory chips for deep learning: Recent trends and prospects
Compute-in-memory (CIM) is a new computing paradigm that addresses the memory-wall
problem in hardware accelerator design for deep learning. The input vector and weight …
Emerging memristive artificial synapses and neurons for energy‐efficient neuromorphic computing
Memristors have recently attracted significant interest due to their applicability as promising
building blocks of neuromorphic computing and electronic systems. The dynamic …
2022 roadmap on neuromorphic computing and engineering
Modern computation based on von Neumann architecture is now a mature cutting-edge
science. In the von Neumann architecture, processing and memory units are implemented …
Equivalent-accuracy accelerated neural-network training using analogue memory
Neural-network training can be slow and energy intensive, owing to the need to transfer the
weight data for the network between conventional digital memory chips and processor chips …
Neuro-inspired computing with emerging nonvolatile memorys
S Yu - Proceedings of the IEEE, 2018 - ieeexplore.ieee.org
This comprehensive review summarizes state of the art, challenges, and prospects of the
neuro-inspired computing with emerging nonvolatile memory devices. First, we discuss the …
Neuromorphic computing using non-volatile memory
Dense crossbar arrays of non-volatile memory (NVM) devices represent one possible path
for implementing massively-parallel and highly energy-efficient neuromorphic computing …
Ferroelectric FET analog synapse for acceleration of deep neural network training
The memory requirements of at-scale deep neural networks (DNN) dictate that synaptic
weight values be stored and updated in off-chip memory such as DRAM, limiting the energy …
NeuroSim: A circuit-level macro model for benchmarking neuro-inspired architectures in online learning
Neuro-inspired architectures based on synaptic memory arrays have been proposed for on-
chip acceleration of weighted sum and weight update in machine/deep learning algorithms …
Reliability of analog resistive switching memory for neuromorphic computing
As artificial intelligence calls for novel energy-efficient hardware, neuromorphic computing
systems based on analog resistive switching memory (RSM) devices have drawn great …
SiGe epitaxial memory for neuromorphic computing with reproducible high performance based on engineered dislocations
Although several types of architecture combining memory cells and transistors have been
used to demonstrate artificial synaptic arrays, they usually present limited scalability and …