Compute in‐memory with non‐volatile elements for neural networks: A review from a co‐design perspective

W Haensch, A Raghunathan, K Roy… - Advanced …, 2023 - Wiley Online Library
Deep learning has become ubiquitous, touching daily lives across the globe. Today,
traditional computer architectures are stressed to their limits in efficiently executing the …

Architecture of computing system based on chiplet

G Shan, Y Zheng, C Xing, D Chen, G Li, Y Yang - Micromachines, 2022 - mdpi.com
Computing systems are widely used in medical diagnosis, climate prediction, autonomous
vehicles, etc. As the key part of electronics, the performance of computing systems is crucial …

X-former: In-memory acceleration of transformers

S Sridharan, JR Stevens, K Roy… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Transformers have achieved great success in a wide variety of natural language processing
(NLP) tasks due to the self-attention mechanism, which assigns an importance score for …
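The importance scores mentioned in this snippet are the softmax-normalized scaled dot products between queries and keys. A minimal sketch (not from the paper; function name and shapes are illustrative assumptions):

```python
import numpy as np

def attention_scores(Q, K):
    """Importance score each query row assigns to each key row.

    Q: (n_queries, d) query matrix; K: (n_keys, d) key matrix.
    Returns an (n_queries, n_keys) row-stochastic score matrix:
    softmax of the scaled dot products Q K^T / sqrt(d).
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    # numerically stable softmax over the key axis
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

Each row sums to 1, so a query's attention over the keys is a probability distribution; these Q K^T products are the matrix multiplications that in-memory accelerators such as the one above target.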

ACE-SNN: Algorithm-hardware co-design of energy-efficient & low-latency deep spiking neural networks for 3D image recognition

G Datta, S Kundu, AR Jaiswal, PA Beerel - Frontiers in neuroscience, 2022 - frontiersin.org
High-quality 3D image recognition is an important component of many vision and robotics
systems. However, the accurate processing of these images requires the use of compute …

Compute-in-memory technologies and architectures for deep learning workloads

M Ali, S Roy, U Saxena, T Sharma… - … Transactions on Very …, 2022 - ieeexplore.ieee.org
The application of deep learning (DL) to real-world domains, such as computer vision, speech
recognition, and robotics, has become ubiquitous. This can be largely attributed to a virtuous …

Samba: Sparsity aware in-memory computing based machine learning accelerator

DE Kim, A Ankit, C Wang, K Roy - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Machine Learning (ML) inference is typically dominated by highly data-intensive Matrix
Vector Multiplication (MVM) computations that may be constrained by memory bottleneck …

Towards ADC-less compute-in-memory accelerators for energy efficient deep learning

U Saxena, I Chakraborty, K Roy - 2022 Design, Automation & …, 2022 - ieeexplore.ieee.org
Compute-in-Memory (CiM) hardware has shown great potential in accelerating Deep Neural
Networks (DNNs). However, most CiM accelerators for matrix vector multiplication rely on …

Design space and memory technology co-exploration for in-memory computing based machine learning accelerators

K He, I Chakraborty, C Wang, K Roy - Proceedings of the 41st IEEE/ACM …, 2022 - dl.acm.org
In-Memory Computing (IMC) has become a promising paradigm for accelerating machine
learning (ML) inference. While IMC architectures built on various memory technologies have …

E-UPQ: Energy-aware unified pruning-quantization framework for CIM architecture

CY Chang, KC Chou, YC Chuang… - IEEE Journal on …, 2023 - ieeexplore.ieee.org
The wide adoption of convolutional neural networks (CNNs) in many applications has given
rise to unrelenting computational demand and memory requirements. Computing-in-Memory …

Commodity bit-cell sponsored MRAM interaction design for binary neural network

H Cai, Z Bian, Z Fan, B Liu… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Binary neural networks (BNNs) can transform multiply-and-accumulate (MAC) operations
into XNOR and accumulation (XAC), which has been proven to greatly reduce the hardware …
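The MAC-to-XAC transform described in this snippet works because, with ±1 values encoded as single bits (+1 → 1, -1 → 0), the product of two values is +1 exactly when their bits agree, i.e. when their XNOR is 1. A minimal sketch (not from the paper; the function name and bit convention are assumptions):

```python
def xnor_mac(a_bits, w_bits, n):
    """Dot product of two length-n binary (+1/-1) vectors via XNOR + popcount.

    a_bits, w_bits: integers whose n low bits encode the vectors
    (bit = 1 means +1, bit = 0 means -1).
    If p bits agree (XNOR popcount), the dot product is p - (n - p) = 2p - n.
    """
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)  # mask to n bits
    popcount = bin(xnor).count("1")
    return 2 * popcount - n
```

For example, `xnor_mac(0b1111, 0b1111, 4)` gives 4, the dot product of two identical +1 vectors; no multiplier is needed, which is the hardware saving the paper exploits in MRAM bit-cells.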