Compute-in-memory chips for deep learning: Recent trends and prospects

S Yu, H Jiang, S Huang, X Peng… - IEEE circuits and systems …, 2021 - ieeexplore.ieee.org
Compute-in-memory (CIM) is a new computing paradigm that addresses the memory-wall
problem in hardware accelerator design for deep learning. The input vector and weight …

Emerging memristive artificial synapses and neurons for energy-efficient neuromorphic computing

S Choi, J Yang, G Wang - Advanced Materials, 2020 - Wiley Online Library
Memristors have recently attracted significant interest due to their applicability as promising
building blocks of neuromorphic computing and electronic systems. The dynamic …

2022 roadmap on neuromorphic computing and engineering

DV Christensen, R Dittmann… - Neuromorphic …, 2022 - iopscience.iop.org
Modern computation based on von Neumann architecture is now a mature cutting-edge
science. In the von Neumann architecture, processing and memory units are implemented …

Equivalent-accuracy accelerated neural-network training using analogue memory

S Ambrogio, P Narayanan, H Tsai, RM Shelby, I Boybat… - Nature, 2018 - nature.com
Neural-network training can be slow and energy intensive, owing to the need to transfer the
weight data for the network between conventional digital memory chips and processor chips …

Neuro-inspired computing with emerging nonvolatile memorys

S Yu - Proceedings of the IEEE, 2018 - ieeexplore.ieee.org
This comprehensive review summarizes the state of the art, challenges, and prospects of
neuro-inspired computing with emerging nonvolatile memory devices. First, we discuss the …

Neuromorphic computing using non-volatile memory

GW Burr, RM Shelby, A Sebastian, S Kim… - … in Physics: X, 2017 - Taylor & Francis
Dense crossbar arrays of non-volatile memory (NVM) devices represent one possible path
for implementing massively-parallel and highly energy-efficient neuromorphic computing …

Ferroelectric FET analog synapse for acceleration of deep neural network training

M Jerry, PY Chen, J Zhang, P Sharma… - 2017 IEEE …, 2017 - ieeexplore.ieee.org
The memory requirements of at-scale deep neural networks (DNN) dictate that synaptic
weight values be stored and updated in off-chip memory such as DRAM, limiting the energy …

NeuroSim: A circuit-level macro model for benchmarking neuro-inspired architectures in online learning

PY Chen, X Peng, S Yu - IEEE Transactions on Computer …, 2018 - ieeexplore.ieee.org
Neuro-inspired architectures based on synaptic memory arrays have been proposed for
on-chip acceleration of weighted sum and weight update in machine/deep learning algorithms …

Reliability of analog resistive switching memory for neuromorphic computing

M Zhao, B Gao, J Tang, H Qian, H Wu - Applied Physics Reviews, 2020 - pubs.aip.org
As artificial intelligence calls for novel energy-efficient hardware, neuromorphic computing
systems based on analog resistive switching memory (RSM) devices have drawn great …

SiGe epitaxial memory for neuromorphic computing with reproducible high performance based on engineered dislocations

S Choi, SH Tan, Z Li, Y Kim, C Choi, PY Chen… - Nature materials, 2018 - nature.com
Although several types of architecture combining memory cells and transistors have been
used to demonstrate artificial synaptic arrays, they usually present limited scalability and …