Memristive technologies for data storage, computation, encryption, and radio-frequency communication
Memristive devices, which combine a resistor with memory functions such that voltage
pulses can change their resistance (and hence their memory state) in a nonvolatile manner …
Generative AI for the optimization of next-generation wireless networks: Basics, state-of-the-art, and open challenges
Next-generation (xG) wireless networks, with their complex and dynamic nature, present
significant challenges to using traditional optimization techniques. Generative Artificial …
A compute-in-memory chip based on resistive random-access memory
Realizing increasingly complex artificial intelligence (AI) functionalities directly on edge
devices calls for unprecedented energy efficiency of edge hardware. Compute-in-memory …
A survey on deep learning hardware accelerators for heterogeneous HPC platforms
Recent trends in deep learning (DL) imposed hardware accelerators as the most viable
solution for several classes of high-performance computing (HPC) applications such as …
Mixed-signal computing for deep neural network inference
B Murmann - IEEE Transactions on Very Large Scale …, 2020 - ieeexplore.ieee.org
Modern deep neural networks (DNNs) require billions of multiply-accumulate operations per
inference. Given that these computations demand relatively low precision, it is feasible to …
PIMCA: A programmable in-memory computing accelerator for energy-efficient DNN inference
This article presents a programmable in-memory computing accelerator (PIMCA) for low-
precision (1–2 b) deep neural network (DNN) inference. The custom 10T1C bitcell in the in …
Filament Engineering of Two‐Dimensional h‐BN for a Self‐Power Mechano‐Nociceptor System
G Ding, RS Chen, P **e, B Yang, G Shang, Y Liu… - Small, 2022 - Wiley Online Library
The switching variability caused by intrinsic stochasticity of the ionic/atomic motions during
the conductive filaments (CFs) formation process largely limits the applications of diffusive …
NN-Baton: DNN workload orchestration and chiplet granularity exploration for multichip accelerators
The revolution of machine learning poses an unprecedented demand for computation
resources, urging more transistors on a single monolithic chip, which is not sustainable in …
CHIMERA: A 0.92-TOPS, 2.2-TOPS/W edge AI accelerator with 2-MByte on-chip foundry resistive RAM for efficient training and inference
Implementing edge artificial intelligence (AI) inference and training is challenging with
current memory technologies. As deep neural networks (DNNs) grow in size, this problem is …
A 95.6-TOPS/W deep learning inference accelerator with per-vector scaled 4-bit quantization in 5 nm
The energy efficiency of deep neural network (DNN) inference can be improved with custom
accelerators. DNN inference accelerators often employ specialized hardware techniques to …