Research progress on memristor: From synapses to computing systems

X Yang, B Taylor, A Wu, Y Chen… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
As the limits of transistor technology are approached, feature size in integrated circuit
transistors has been reduced very near to the minimum physically-realizable channel length …

Testability and dependability of AI hardware: Survey, trends, challenges, and perspectives

F Su, C Liu, HG Stratigopoulos - IEEE Design & Test, 2023 - ieeexplore.ieee.org
Hardware realization of artificial intelligence (AI) requires different design styles, and even different underlying technologies, from those used in traditional digital processors or logic circuits …

Analog architectures for neural network acceleration based on non-volatile memory

TP Xiao, CH Bennett, B Feinberg, S Agarwal… - Applied Physics …, 2020 - pubs.aip.org
Analog hardware accelerators, which perform computation within a dense memory array,
have the potential to overcome the major bottlenecks faced by digital hardware for data …
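The in-memory computation described above is typically realized with a resistive crossbar: weights are stored as conductances, and a matrix-vector product falls out of Ohm's and Kirchhoff's laws in a single analog step. A minimal NumPy sketch of this idea, assuming a hypothetical differential-pair weight mapping and a simple Gaussian device-variation model (names and parameters here are illustrative, not from any of the cited papers):

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_mvm(W, x, g_max=1e-4, noise_sigma=0.0):
    """Emulate an analog crossbar MVM: map signed weights onto a
    differential pair of non-negative conductances, apply input
    voltages x, and sum the resulting column currents."""
    w_max = np.abs(W).max()
    G_pos = np.where(W > 0, W, 0.0) / w_max * g_max   # positive-weight array
    G_neg = np.where(W < 0, -W, 0.0) / w_max * g_max  # negative-weight array
    if noise_sigma > 0:  # crude device-to-device conductance variation
        G_pos = G_pos + rng.normal(0, noise_sigma * g_max, G_pos.shape)
        G_neg = G_neg + rng.normal(0, noise_sigma * g_max, G_neg.shape)
    i_out = G_pos @ x - G_neg @ x   # differential column currents
    return i_out / g_max * w_max    # rescale currents back to weight units

W = np.array([[0.5, -0.25], [-1.0, 0.75]])
x = np.array([1.0, 2.0])
print(crossbar_mvm(W, x))  # with noise_sigma=0 this equals W @ x
```

With `noise_sigma=0` the result is exact; raising it shows how conductance variation perturbs the analog product, which is the accuracy concern several of the entries below address.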

MNSIM 2.0: A behavior-level modeling tool for memristor-based neuromorphic computing systems

Z Zhu, H Sun, K Qiu, L Xia, G Krishnan, G Dai… - Proceedings of the …, 2020 - dl.acm.org
Memristor-based neuromorphic computing systems offer alternative solutions to boost the energy efficiency of Neural Network (NN) algorithms. Because of the large-scale …

MNSIM 2.0: A behavior-level modeling tool for processing-in-memory architectures

Z Zhu, H Sun, T Xie, Y Zhu, G Dai, L Xia… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
In the age of artificial intelligence (AI), the massive data movement between memory and computing units becomes the bottleneck of von Neumann architectures, i.e., the "memory wall" …

Combined HW/SW drift and variability mitigation for PCM-based analog in-memory computing for neural network applications

A Antolini, C Paolino, F Zavalloni, A Lico… - IEEE Journal on …, 2023 - ieeexplore.ieee.org
Matrix-Vector Multiplications (MVMs) represent a heavy workload for both training and
inference in Deep Neural Networks (DNNs) applications. Analog In-memory Computing …

On the accuracy of analog neural network inference accelerators

TP Xiao, B Feinberg, CH Bennett… - IEEE Circuits and …, 2022 - ieeexplore.ieee.org
Specialized accelerators have recently garnered attention as a method to reduce the power
consumption of neural network inference. A promising category of accelerators utilizes …

HARDSEA: Hybrid analog-ReRAM clustering and digital-SRAM in-memory computing accelerator for dynamic sparse self-attention in transformer

S Liu, C Mu, H Jiang, Y Wang, J Zhang… - … Transactions on Very …, 2023 - ieeexplore.ieee.org
Self-attention-based transformers have outperformed recurrent and convolutional neural
networks (RNN/CNNs) in many applications. Despite the effectiveness, calculating self …

Sparse attention acceleration with synergistic in-memory pruning and on-chip recomputation

A Yazdanbakhsh, A Moradifirouzabadi… - 2022 55th IEEE/ACM …, 2022 - ieeexplore.ieee.org
As its core computation, a self-attention mechanism gauges pairwise correlations across the
entire input sequence. Despite favorable performance, calculating pairwise correlations is …
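The quadratic cost these two attention-accelerator entries target is easy to see in code: the score matrix holds one entry per token pair, so it grows as O(n²) in sequence length. A small self-contained sketch of standard scaled dot-product attention (toy sizes, not any paper's specific kernel):

```python
import numpy as np

def softmax(s):
    """Row-wise softmax, stabilized by subtracting the row max."""
    e = np.exp(s - s.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: the n x n score matrix Q @ K.T
    is the pairwise-correlation step that dominates the cost."""
    d = Q.shape[-1]
    scores = (Q @ K.T) / np.sqrt(d)  # one entry per token pair: O(n^2 * d)
    A = softmax(scores)
    return A, A @ V

n, d = 4, 8
rng = np.random.default_rng(1)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
A, out = attention(Q, K, V)
print(A.shape)  # (n, n): quadratic in sequence length
```

Sparse-attention accelerators exploit the observation that most rows of `A` are dominated by a few large entries, so many of the n² scores can be pruned or approximated.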

Multi-objective optimization of ReRAM crossbars for robust DNN inferencing under stochastic noise

X Yang, S Belakaria, BK Joardar… - 2021 IEEE/ACM …, 2021 - ieeexplore.ieee.org
Resistive random-access memory (ReRAM) is a promising technology for designing
hardware accelerators for deep neural network (DNN) inferencing. However, stochastic …
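The stochastic-noise problem this last entry raises can be illustrated with a toy noise-injection experiment: perturb stored weights with multiplicative Gaussian noise (a common simplification of ReRAM conductance variation, not the paper's actual device model) and measure how often predictions change. All names and the noise model here are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_inference(W, X, sigma):
    """Classify by argmax after applying per-read multiplicative
    Gaussian noise to the stored weight matrix."""
    W_noisy = W * (1 + rng.normal(0, sigma, W.shape))
    return np.argmax(X @ W_noisy.T, axis=1)

# toy linear classifier; "labels" are its own noise-free predictions
W = np.array([[2.0, 0.0], [0.0, 2.0]])
X = rng.standard_normal((200, 2))
labels = np.argmax(X @ W.T, axis=1)

for sigma in (0.0, 0.1, 0.5):
    acc = np.mean(noisy_inference(W, X, sigma) == labels)
    print(f"sigma={sigma:.1f}  agreement with noise-free: {acc:.2f}")
```

At `sigma=0` agreement is exact by construction; as `sigma` grows, predictions increasingly flip, which is the accuracy-robustness trade-off that noise-aware multi-objective optimization of crossbar designs tries to manage.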