A survey of SRAM-based in-memory computing techniques and applications
As von Neumann computing architectures become increasingly constrained by data-movement overheads, researchers have started exploring in-memory computing (IMC) …
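To make the IMC idea in this entry concrete, here is a minimal behavioural sketch (not any circuit from the survey): a dot product evaluated where the weights are stored, so only the inputs and the accumulated result move. The function name and data values are illustrative assumptions.

```python
import numpy as np

def imc_dot_product(stored_weights, input_bits):
    # Toy behavioural model of one in-memory-computing column: the weights
    # stay "in place" (the stored array), the input is applied to every row
    # at once, and the column returns the accumulated sum, i.e. a dot
    # product computed without shuttling weights to a separate compute unit.
    return int(np.dot(stored_weights, input_bits))

weights = np.array([1, 0, 1, 1, 0, 1])   # bits held in one memory column
inputs = np.array([1, 1, 0, 1, 0, 1])    # wordline activations this cycle
print(imc_dot_product(weights, inputs))  # -> 3
```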
A systematic literature review on binary neural networks
R Sayed, H Azmi, H Shawkey, AH Khalil… - IEEE Access, 2023 - ieeexplore.ieee.org
This paper presents an extensive literature review on Binary Neural Network (BNN). BNN
utilizes binary weights and activation function parameters to substitute the full-precision …
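As a rough illustration of the binarization this snippet describes, the sketch below (a generic example, not this review's or any specific BNN's implementation) replaces full-precision weights and activations with +1/-1 values and computes the resulting integer dot product; names and shapes are assumptions.

```python
import numpy as np

def binarize(x):
    # Map full-precision values to {-1, +1} with the sign function,
    # treating zero as +1 (a common convention in the BNN literature).
    return np.where(x >= 0, 1, -1).astype(np.int8)

def bnn_dense(activations, weights):
    # Binarize both operands, then accumulate products in integer
    # arithmetic; this +1/-1 dot product is what XNOR/popcount hardware
    # realizes without multipliers.
    return binarize(activations) @ binarize(weights).T

# Toy usage: one input vector and a 4x8 weight matrix of random values.
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
W = rng.standard_normal((4, 8))
print(bnn_dense(x, W))
```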
Evaluating machine learning workloads on memory-centric computing systems
Training machine learning (ML) algorithms is a computationally intensive process, which is
frequently memory-bound due to repeatedly accessing large training datasets. As a result …
CIMAT: A compute-in-memory architecture for on-chip training based on transpose SRAM arrays
Rapid development in deep neural networks (DNNs) is enabling many intelligent
applications. However, on-chip training of DNNs is challenging due to the extensive …
Accelerating deep neural network in-situ training with non-volatile and volatile memory based hybrid precision synapses
Compute-in-memory (CIM) with emerging non-volatile memories (eNVMs) is time and
energy efficient for deep neural network (DNN) inference. However, challenges still remain …
A two-way SRAM array based accelerator for deep neural network on-chip training
On-chip training of large-scale deep neural networks (DNNs) is challenging due to
computational complexity and resource limitation. Compute-in-memory (CIM) architecture …
A review on SRAM-based computing in-memory: Circuits, functions, and applications
Z Lin, Z Tong, J Zhang, F Wang, T Xu… - Journal of …, 2022 - iopscience.iop.org
Artificial intelligence (AI) processes data-centric applications with minimal effort. However, it
poses new challenges to system design in terms of computational speed and energy …
An Experimental Evaluation of Machine Learning Training on a Real Processing-in-Memory System
Training machine learning (ML) algorithms is a computationally intensive process, which is
frequently memory-bound due to repeatedly accessing large training datasets. As a result …
Cambricon-U: A systolic random increment memory architecture for unary computing
Unary computing, whose arithmetics require only one logic gate, has enabled efficient DNN
processing, especially on strictly power-constrained devices. However, unary computing still …
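The "one logic gate" arithmetic mentioned in this snippet can be illustrated with a generic stochastic/unary bitstream example (this is not Cambricon-U's architecture): multiplying two rate-coded values reduces to a bitwise AND. The stream length and seed below are illustrative assumptions.

```python
import numpy as np

def to_bitstream(p, length, rng):
    # Encode a probability p in [0, 1] as a random bitstream whose
    # fraction of 1s approximates p (stochastic / unary rate coding).
    return (rng.random(length) < p).astype(np.uint8)

def unary_multiply(p_a, p_b, length=4096, seed=0):
    # Multiplication reduces to a bitwise AND of two independent
    # bitstreams; the density of 1s in the result approximates p_a * p_b.
    rng = np.random.default_rng(seed)
    a = to_bitstream(p_a, length, rng)
    b = to_bitstream(p_b, length, rng)
    return float(np.mean(a & b))

print(unary_multiply(0.6, 0.5))  # close to 0.30 for long streams
```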
PIM-Opt: Demystifying Distributed Optimization Algorithms on a Real-World Processing-In-Memory System
Modern Machine Learning (ML) training on large-scale datasets is a very time-consuming
workload. It relies on the optimization algorithm Stochastic Gradient Descent (SGD) due to …
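For context on the SGD workload this snippet refers to, here is a plain minibatch SGD loop on a least-squares problem (a generic sketch, not PIM-Opt's distributed algorithm); the function name, data, and hyperparameters are illustrative assumptions.

```python
import numpy as np

def sgd_linear_regression(X, y, lr=0.05, epochs=50, batch=32, seed=0):
    # Plain minibatch SGD on a least-squares objective: every step reads a
    # random subset of the data, which is why training repeatedly streams
    # the dataset through memory.
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        idx = rng.permutation(len(X))
        for start in range(0, len(X), batch):
            b = idx[start:start + batch]
            grad = X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad
    return w

# Toy usage: recover the weights of a synthetic linear model.
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 5))
w_true = np.arange(1, 6, dtype=float)
y = X @ w_true + 0.1 * rng.standard_normal(1000)
print(sgd_linear_regression(X, y))
```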