Model compression and hardware acceleration for neural networks: A comprehensive survey
Domain-specific hardware is becoming a promising topic against the backdrop of slowing
improvement for general-purpose processors due to the foreseeable end of Moore's Law …
Toward memristive in-memory computing: principles and applications
H Bao, H Zhou, J Li, H Pei, J Tian, L Yang… - Frontiers of …, 2022 - Springer
With the rapid growth of computer science and big data, the traditional von Neumann
architecture suffers from aggravated data communication costs due to its separated structure …
Automatic diagnosis of COVID-19 with MCA-inspired TQWT-based classification of chest X-ray images
In this era of Coronavirus disease 2019 (COVID-19), an accurate diagnosis method with
reduced diagnosis time and cost can effectively help in controlling the disease spread with the …
Felix: A ferroelectric FET based low power mixed-signal in-memory architecture for DNN acceleration
Today, a large number of applications depend on deep neural networks (DNNs) to process
data and perform complicated tasks under restricted power and latency specifications …
An energy-efficient quantized and regularized training framework for processing-in-memory accelerators
Convolutional Neural Networks (CNNs) have made breakthroughs in various fields, while
their energy consumption has become enormous. Processing-In-Memory (PIM) architectures …
NAS4RRAM: neural network architecture search for inference on RRAM-based accelerators
RRAM-based accelerators enable fast and energy-efficient inference for neural
networks. However, there are some requirements for deploying neural networks on RRAM …
Organic memristor with synaptic plasticity for neuromorphic computing applications
J Zeng, X Chen, S Liu, Q Chen, G Liu - Nanomaterials, 2023 - mdpi.com
Memristors have been considered to be more efficient than traditional Complementary Metal
Oxide Semiconductor (CMOS) devices in implementing artificial synapses, which are …
Enabling secure NVM-based in-memory neural network computing by sparse fast gradient encryption
Neural network (NN) computing is energy-consuming on traditional computing systems,
owing to the inherent memory wall bottleneck of the von Neumann architecture and the …
SME: ReRAM-based sparse-multiplication-engine to squeeze-out bit sparsity of neural network
The Resistive Random-Access Memory (ReRAM) crossbar is a promising technique for deep
neural network (DNN) accelerators, thanks to its in-memory and in-situ analog computing …
Review of security techniques for memristor computing systems
Neural network (NN) algorithms have become the dominant tool in visual object recognition,
natural language processing, and robotics. To enhance the computational efficiency of these …