Research progress on memristor: From synapses to computing systems
As the limits of transistor technology are approached, the feature size of integrated-circuit
transistors has been reduced to very near the minimum physically realizable channel length …
Testability and dependability of AI hardware: Survey, trends, challenges, and perspectives
Hardware realization of artificial intelligence (AI) requires different design styles, and even
different underlying technologies, from those used in traditional digital processors or logic circuits …
Analog architectures for neural network acceleration based on non-volatile memory
Analog hardware accelerators, which perform computation within a dense memory array,
have the potential to overcome the major bottlenecks faced by digital hardware for data …
MNSIM 2.0: A behavior-level modeling tool for memristor-based neuromorphic computing systems
Memristor-based neuromorphic computing systems offer alternative solutions to boost the
computing energy efficiency of neural network (NN) algorithms. Because of the large-scale …
MNSIM 2.0: A behavior-level modeling tool for processing-in-memory architectures
In the age of artificial intelligence (AI), the huge data movements between memory and
computing units become the bottleneck of von Neumann architectures, i.e., the “memory wall” …
Combined HW/SW drift and variability mitigation for PCM-based analog in-memory computing for neural network applications
A Antolini, C Paolino, F Zavalloni, A Lico… - IEEE Journal on …, 2023 - ieeexplore.ieee.org
Matrix-Vector Multiplications (MVMs) represent a heavy workload for both training and
inference in Deep Neural Network (DNN) applications. Analog In-memory Computing …
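The entry above concerns mitigating conductance drift and variability when PCM-based analog in-memory computing performs MVMs. As a rough, hedged illustration of why such mitigation matters (not the paper's method: the differential-pair weight mapping, Gaussian programming noise, and power-law drift exponent below are illustrative assumptions), the following sketch simulates a crossbar MVM and shows how drift and variability pull the analog result away from the ideal product.

```python
import numpy as np

rng = np.random.default_rng(0)

def program_conductances(W, g_max=1.0, sigma_prog=0.02):
    """Map signed weights onto a differential pair of conductances,
    adding Gaussian programming variability (illustrative model)."""
    W_norm = W / np.max(np.abs(W))
    g_pos = np.clip(W_norm, 0, None) * g_max
    g_neg = np.clip(-W_norm, 0, None) * g_max
    noise = rng.normal(0.0, sigma_prog, size=(2,) + W.shape)
    return g_pos + noise[0], g_neg + noise[1]

def drift(g, t, nu=0.05, t0=1.0):
    """Power-law conductance drift, g(t) = g(t0) * (t/t0)**(-nu),
    a common first-order model for PCM devices (nu is assumed here)."""
    return g * (t / t0) ** (-nu)

def analog_mvm(x, g_pos, g_neg, t=1.0):
    """Crossbar MVM: each output is a sum of inputs weighted by the
    (drifted, noisy) differential conductances."""
    return drift(g_pos, t) @ x - drift(g_neg, t) @ x

# Compare the ideal (normalized) product with the degraded analog result.
W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)
g_pos, g_neg = program_conductances(W)
print("ideal  :", (W / np.max(np.abs(W))) @ x)
print("analog :", analog_mvm(x, g_pos, g_neg, t=100.0))
```

The combined HW/SW techniques surveyed in papers like this one aim to compensate exactly these two effects (the time-dependent scaling from drift and the spread from programming variability) before or during inference.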
On the accuracy of analog neural network inference accelerators
Specialized accelerators have recently garnered attention as a method to reduce the power
consumption of neural network inference. A promising category of accelerators utilizes …
HARDSEA: Hybrid analog-ReRAM clustering and digital-SRAM in-memory computing accelerator for dynamic sparse self-attention in transformer
Self-attention-based transformers have outperformed recurrent and convolutional neural
networks (RNNs/CNNs) in many applications. Despite their effectiveness, calculating self …
Sparse attention acceleration with synergistic in-memory pruning and on-chip recomputation
As its core computation, a self-attention mechanism gauges pairwise correlations across the
entire input sequence. Despite favorable performance, calculating pairwise correlations is …
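The two attention-related entries above both start from the same observation: dense self-attention forms an n-by-n score matrix of pairwise correlations over a length-n sequence. A minimal NumPy sketch of plain dense attention (not the hybrid in-memory or pruning schemes these papers propose; all names below are illustrative) makes that quadratic cost concrete.

```python
import numpy as np

def attention_scores(Q, K):
    """Pairwise score matrix of shape (n, n): every query is compared
    against every key, so compute and memory traffic grow quadratically
    with the sequence length n."""
    n, d = Q.shape
    return (Q @ K.T) / np.sqrt(d)

def self_attention(X, Wq, Wk, Wv):
    """Vanilla (dense) self-attention over a length-n input sequence X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    S = attention_scores(Q, K)                      # n x n scores
    A = np.exp(S - S.max(axis=-1, keepdims=True))   # row-wise softmax
    A /= A.sum(axis=-1, keepdims=True)
    return A @ V

rng = np.random.default_rng(0)
n, d = 16, 8
X = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (16, 8); the intermediate score matrix was 16 x 16
```

Sparse-attention accelerators such as those above try to avoid materializing most of that n-by-n matrix, either by predicting which scores matter (clustering, pruning) or by recomputing a small subset on chip.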
Multi-objective optimization of ReRAM crossbars for robust DNN inferencing under stochastic noise
Resistive random-access memory (ReRAM) is a promising technology for designing
hardware accelerators for deep neural network (DNN) inferencing. However, stochastic …