Hardware implementation of memristor-based artificial neural networks
Artificial Intelligence (AI) is currently experiencing a bloom driven by deep learning (DL)
techniques, which rely on networks of connected simple computing units operating in …
Memristor-based hardware accelerators for artificial intelligence
Satisfying the rapid evolution of artificial intelligence (AI) algorithms requires exponential
growth in computing resources, which, in turn, presents huge challenges for deploying AI …
A survey on computationally efficient neural architecture search
Neural architecture search (NAS) has become increasingly popular in the deep learning
community recently, mainly because it allows interested users …
Neural architecture search for in-memory computing-based deep learning accelerators
The rapid growth of artificial intelligence and the increasing complexity of neural network
models are driving demand for efficient hardware architectures that can address power …
MNSIM 2.0: A behavior-level modeling tool for processing-in-memory architectures
In the age of artificial intelligence (AI), the huge data movements between memory and
computing units become the bottleneck of von Neumann architectures, i.e., the “memory wall” …
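The “memory wall” this entry refers to is the cost of shuttling weights between memory and compute units; processing-in-memory tools such as MNSIM model crossbar arrays that avoid it by computing matrix-vector products where the weights are stored. A minimal sketch of that idea under an idealized device model (NumPy only; the conductance range, number of levels, and function names are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def weights_to_conductances(W, g_min=1e-6, g_max=1e-4, levels=16):
    """Map a real-valued weight matrix onto quantized device conductances.

    Idealized assumption: each weight uses a differential pair of cells
    (one for the positive part, one for the negative part), linearly scaled
    into [g_min, g_max] and rounded to `levels` discrete states.
    """
    scale = np.max(np.abs(W))
    if scale == 0:
        scale = 1.0
    span = g_max - g_min
    g_pos = np.clip(W, 0, None) / scale * span + g_min
    g_neg = np.clip(-W, 0, None) / scale * span + g_min
    step = span / (levels - 1)

    def quantize(g):
        return np.round((g - g_min) / step) * step + g_min

    return quantize(g_pos), quantize(g_neg), scale

def crossbar_mvm(v_in, g_pos, g_neg, scale, g_min=1e-6, g_max=1e-4):
    """Ideal analog MVM: column currents are I = v^T (G+ - G-), then rescaled."""
    i_out = v_in @ g_pos - v_in @ g_neg      # Kirchhoff-style column summation
    return i_out * scale / (g_max - g_min)   # undo the weight-to-conductance scaling

# Toy check: the crossbar result should approximate x @ W up to quantization error.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
x = rng.standard_normal(4)
g_p, g_n, s = weights_to_conductances(W)
print(crossbar_mvm(x, g_p, g_n, s))
print(x @ W)
```

In a physical array the same arithmetic is performed by Ohm's law (per-cell currents) and Kirchhoff's current law (column summation); the sketch reproduces only the ideal computation, not device non-idealities.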
CODEBench: A neural architecture and hardware accelerator co-design framework
Recently, automated co-design of machine learning (ML) models and accelerator
architectures has attracted significant attention from both industry and academia …
Towards efficient in-memory computing hardware for quantized neural networks: State-of-the-art, open challenges and perspectives
O Krestinskaya, L Zhang… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
The amount of data processed in the cloud, the development of Internet-of-Things (IoT)
applications, and growing data privacy concerns force the transition from cloud-based to …
Designing efficient bit-level sparsity-tolerant memristive networks
With the rapid progress of deep neural network (DNN) applications on memristive platforms,
there has been a growing interest in the acceleration and compression of memristive …
A memristive all-inclusive hypernetwork for parallel analog deployment of full search space architectures
In recent years, there has been a significant advancement in memristor-based neural
networks, positioning them as a pivotal processing-in-memory deployment architecture for a …
APQ: Automated DNN Pruning and Quantization for ReRAM-Based Accelerators
S Yang, S He, H Duan, W Chen… - … on Parallel and …, 2023 - ieeexplore.ieee.org
Emerging ReRAM-based accelerators support in-memory computation to accelerate deep
neural network (DNN) inference. Weight matrix pruning is a widely used technique to reduce …
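The two compression steps named in this title, pruning and quantization, can be illustrated independently of APQ's automated search. A hedged sketch of plain magnitude pruning followed by symmetric uniform quantization (the sparsity ratio, bit-width, and helper names are arbitrary example choices, not APQ's method):

```python
import numpy as np

def magnitude_prune(W, sparsity=0.75):
    """Zero out the smallest-magnitude entries until `sparsity` of W is zero."""
    k = int(W.size * sparsity)
    if k == 0:
        return W.copy()
    threshold = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    return np.where(np.abs(W) <= threshold, 0.0, W)

def quantize_symmetric(W, bits=4):
    """Round W to at most 2**(bits-1) - 1 uniformly spaced levels per sign."""
    q_max = 2 ** (bits - 1) - 1
    max_abs = np.max(np.abs(W))
    if max_abs == 0:
        return W.copy()
    scale = max_abs / q_max
    return np.clip(np.round(W / scale), -q_max, q_max) * scale

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
W_c = quantize_symmetric(magnitude_prune(W, sparsity=0.75), bits=4)
print(f"zero fraction: {np.mean(W_c == 0):.2f}, distinct levels: {np.unique(W_c).size}")
```

On a ReRAM crossbar, zeroed weights correspond to cells or columns that need not be programmed or read, and fewer conductance levels relax write-precision requirements, which is why these two knobs are typically searched jointly with the accelerator configuration.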