Hardware implementation of memristor-based artificial neural networks

F Aguirre, A Sebastian, M Le Gallo, W Song… - Nature …, 2024 - nature.com
Artificial Intelligence (AI) is currently experiencing a bloom driven by deep learning (DL)
techniques, which rely on networks of connected simple computing units operating in …

Memristors—From in‐memory computing, deep learning acceleration, and spiking neural networks to the future of neuromorphic and bio‐inspired computing

A Mehonic, A Sebastian, B Rajendran… - Advanced Intelligent …, 2020 - Wiley Online Library
Machine learning, particularly in the form of deep learning (DL), has driven most of the
recent fundamental developments in artificial intelligence (AI). DL is based on computational …

2022 roadmap on neuromorphic computing and engineering

DV Christensen, R Dittmann… - Neuromorphic …, 2022 - iopscience.iop.org
Modern computation based on von Neumann architecture is now a mature cutting-edge
science. In the von Neumann architecture, processing and memory units are implemented …

Reconfigurable perovskite nickelate electronics for artificial intelligence

HT Zhang, TJ Park, ANMN Islam, DSJ Tran, S Manna… - Science, 2022 - science.org
Reconfigurable devices offer the ability to program electronic circuits on demand. In this
work, we demonstrated on-demand creation of artificial neurons, synapses, and memory …

Accurate deep neural network inference using computational phase-change memory

V Joshi, M Le Gallo, S Haefeli, I Boybat… - Nature …, 2020 - nature.com
In-memory computing using resistive memory devices is a promising non-von Neumann
approach for making energy-efficient deep learning inference hardware. However, due to …

Analog architectures for neural network acceleration based on non-volatile memory

TP Xiao, CH Bennett, B Feinberg, S Agarwal… - Applied Physics …, 2020 - pubs.aip.org
Analog hardware accelerators, which perform computation within a dense memory array,
have the potential to overcome the major bottlenecks faced by digital hardware for data …
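
Several of the works listed here (this entry and the computational phase-change memory paper above among them) rest on the same core operation: a matrix-vector multiplication performed in place by a crossbar of non-volatile conductances through Ohm's and Kirchhoff's laws. The following is a minimal Python sketch of that idea only, not code from any of the cited papers; the differential weight-to-conductance mapping, the conductance range, and the single Gaussian read-noise term are illustrative assumptions.

```python
import numpy as np

def weights_to_conductances(w, g_min=1e-6, g_max=1e-4):
    """Map signed weights onto differential conductance pairs (G+, G-).

    Assumed linear mapping: |w| is scaled into [g_min, g_max] siemens and the
    sign decides which device of the pair carries the programmed value.
    Also returns the scale factor needed to convert output currents back.
    """
    scale = (g_max - g_min) / np.max(np.abs(w))
    g_pos = np.where(w > 0, g_min + w * scale, g_min)
    g_neg = np.where(w < 0, g_min - w * scale, g_min)
    return g_pos, g_neg, scale

def crossbar_mvm(g_pos, g_neg, v, noise_sigma=0.01):
    """Analog matrix-vector multiply: column currents I = (G+ - G-) @ V.

    A single relative Gaussian term (noise_sigma) stands in for read noise
    and other non-idealities; real devices have richer error sources.
    """
    i_out = (g_pos - g_neg) @ v
    return i_out + noise_sigma * np.abs(i_out) * np.random.randn(*i_out.shape)

# Toy usage: one dense layer evaluated "in memory".
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8))   # 4 outputs, 8 inputs
x = rng.standard_normal(8)        # input activations applied as read voltages
g_pos, g_neg, scale = weights_to_conductances(w)
print("ideal :", w @ x)
print("analog:", crossbar_mvm(g_pos, g_neg, x) / scale)
```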

Compute in‐memory with non‐volatile elements for neural networks: A review from a co‐design perspective

W Haensch, A Raghunathan, K Roy… - Advanced …, 2023 - Wiley Online Library
Deep learning has become ubiquitous, touching daily lives across the globe. Today,
traditional computer architectures are stressed to their limits in efficiently executing the …

Bulk‐Switching Memristor‐Based Compute‐In‐Memory Module for Deep Neural Network Training

Y Wu, Q Wang, Z Wang, X Wang, B Ayyagari… - Advanced …, 2023 - Wiley Online Library
The constant drive to achieve higher performance in deep neural networks (DNNs) has led
to the proliferation of very large models. Model training, however, requires intensive …

A memristive deep belief neural network based on silicon synapses

W Wang, L Danial, Y Li, E Herbelin, E Pikhay… - Nature …, 2022 - nature.com
Memristor-based neuromorphic computing could overcome the limitations of traditional von
Neumann computing architectures—in which data are shuffled between separate memory …

Read-optimized 28nm HKMG multibit FeFET synapses for inference-engine applications

S De, F Müller, HH Le, M Lederer… - IEEE Journal of the …, 2022 - ieeexplore.ieee.org
This paper reports 2 bits/cell ferroelectric FET (FeFET) devices with a 500 ns write pulse of
maximum amplitude 4.5 V for inference-engine applications. FeFET devices were fabricated …