Configurable precision neural network with differential binary non-volatile memory cell structure

WH Choi, PF Chiu, W Ma, M Lueker-boden - US Patent 10,643,705, 2020 - Google Patents
Use of a non-volatile memory array architecture to realize a binary neural network (BNN) allows for
matrix multiplication and accumulation to be performed within the memory array. A unit …
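
To make the in-array multiply-accumulate concrete, below is a minimal numpy sketch of bit-sliced matrix multiplication built from binary weight planes, one plausible way configurable precision can be composed from binary cells. The function names and the 4-bit default are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def bit_slice(weights, n_bits=4):
    """Split unsigned integer weights into binary planes (LSB first).
    Each plane would be stored in its own binary NVM sub-array."""
    return [(weights >> b) & 1 for b in range(n_bits)]

def in_memory_mac(x, weights, n_bits=4):
    """Emulate a configurable-precision MAC: each binary plane performs an
    analog-style column accumulation; results are shifted and summed."""
    acc = np.zeros(weights.shape[1])
    for b, plane in enumerate(bit_slice(weights, n_bits)):
        acc += (x @ plane) * (1 << b)   # column-wise summation per bit plane
    return acc

x = np.array([1, 0, 1, 1])                    # binary activations
w = np.random.randint(0, 16, size=(4, 3))     # 4-bit weights, 3 output neurons
print(in_memory_mac(x, w), x @ w)             # the two results should match
```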

Method and system for performing analog complex vector-matrix multiplication

RM Hatcher, JA Kittl, BJ Obradovic… - US Patent 10,878,317, 2020 - Google Patents
A hardware device and method for performing a multiply-accumulate operation are
described. The device includes input lines, weight cells, and output lines. The input lines …
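
As an illustration of the idea, here is a small numpy sketch of a complex vector-matrix multiply decomposed into real-valued crossbar passes (output currents modeled as conductance-weighted sums of input voltages). The decomposition is a standard algebraic identity; the patent's actual circuit arrangement is not reproduced here.

```python
import numpy as np

def crossbar_vmm(conductances, voltages):
    """Ideal analog crossbar: output currents are conductance-weighted sums
    of the input voltages (Ohm's law + Kirchhoff's current law)."""
    return conductances.T @ voltages

def complex_vmm(w_real, w_imag, x_real, x_imag):
    """Complex VMM from four real crossbar passes:
    (Wr + jWi)^T (xr + jxi) = (Wr^T xr - Wi^T xi) + j(Wi^T xr + Wr^T xi)."""
    out_real = crossbar_vmm(w_real, x_real) - crossbar_vmm(w_imag, x_imag)
    out_imag = crossbar_vmm(w_imag, x_real) + crossbar_vmm(w_real, x_imag)
    return out_real, out_imag

Wr, Wi = np.random.randn(4, 3), np.random.randn(4, 3)
xr, xi = np.random.randn(4), np.random.randn(4)
re, im = complex_vmm(Wr, Wi, xr, xi)
print(np.allclose(re + 1j * im, (Wr + 1j * Wi).T @ (xr + 1j * xi)))  # True
```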

Differential non-volatile memory cell for artificial neural network

PF Chiu, WH Choi, W Ma, M Lueker-boden - US Patent 10,643,119, 2020 - Google Patents
Use of a non-volatile memory array architecture to realize a binary neural network (BNN) allows for
matrix multiplication and accumulation to be performed within the memory array. A unit …
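
A minimal sketch of the differential idea, assuming signed binary weights stored as complementary cell pairs and read out as a bit-line current difference; the conductance values and function names are illustrative only, not the patented cell structure.

```python
import numpy as np

def encode_differential(weights):
    """Encode signed binary weights {-1, +1} as complementary cell pairs:
    +1 -> (G_on, G_off), -1 -> (G_off, G_on). The two arrays model the
    conductances seen on the positive and negative bit lines."""
    g_on, g_off = 1.0, 0.0                   # idealized on/off conductances
    pos = np.where(weights > 0, g_on, g_off)
    neg = np.where(weights > 0, g_off, g_on)
    return pos, neg

def differential_mac(x, weights):
    """Accumulate on both bit lines and subtract, so the signed dot product
    emerges without storing a signed value in any single cell."""
    pos, neg = encode_differential(weights)
    return x @ pos - x @ neg

x = np.array([1.0, 0.0, 1.0, 1.0])
w = np.array([+1, -1, +1, -1])
print(differential_mac(x, w), x @ w)         # both give the signed result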

Realization of neural networks with ternary inputs and binary weights in NAND memory arrays

TT Hoang, WH Choi, M Lueker-boden - US Patent 11,170,290, 2021 - Google Patents
Use of a NAND array architecture to realize a binary neural network (BNN) allows for matrix
multiplication and accumulation to be performed within the memory …
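
For intuition, a short sketch of a ternary-input, binary-weight dot product in which zero inputs are simply skipped and the remaining terms reduce to a match/mismatch count, roughly the arithmetic such an array has to realize. This is a generic formulation, not the patented NAND implementation.

```python
import numpy as np

def ternary_binary_dot(x_ternary, w_binary):
    """Dot product of ternary inputs {-1, 0, +1} with binary weights {-1, +1}.
    Zero inputs contribute nothing (their word lines would simply not be
    driven); nonzero inputs reduce to an XNOR-style match/mismatch count."""
    active = x_ternary != 0
    match = np.sign(x_ternary[active]) == w_binary[active]
    return int(match.sum() - (~match).sum())   # matches +1, mismatches -1

x = np.array([+1, 0, -1, +1, 0])
w = np.array([+1, -1, -1, -1, +1])
print(ternary_binary_dot(x, w), int(x @ w))    # both yield the same value
```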

Hardware accelerated discretized neural network

W Ma, PF Chiu, M Qin, WH Choi… - US Patent …, 2021 - Google Patents
An innovative low-bit-width device may include a first digital-to-analog converter (DAC), a
second DAC, a plurality of non-volatile memory (NVM) weight arrays, one or more analog-to …
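
The sketch below mimics that pipeline in numpy: integer activation codes pass through a model DAC, an analog-style multiply-accumulate in a conductance matrix, and a model ADC. The bit widths and reference values are arbitrary assumptions, not the patent's parameters.

```python
import numpy as np

def dac(codes, n_bits=4, v_ref=1.0):
    """Model a DAC: map integer codes to analog input voltages."""
    return v_ref * codes / (2 ** n_bits - 1)

def adc(currents, n_bits=8):
    """Model an ADC: quantize analog column currents back to integer codes."""
    i_ref = np.max(np.abs(currents))
    if i_ref == 0:
        return np.zeros_like(currents, dtype=int)
    return np.round(currents / i_ref * (2 ** n_bits - 1)).astype(int)

def nvm_layer(codes, conductances):
    """One low-bit-width layer: DAC -> analog in-array MAC -> ADC."""
    v = conductances.T @ dac(codes)     # analog multiply-accumulate in the array
    return adc(v)

codes = np.array([3, 15, 7, 0])          # 4-bit activation codes
g = np.random.rand(4, 2)                 # NVM conductance matrix
print(nvm_layer(codes, g))
```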

Realization of binary neural networks in NAND memory arrays

WH Choi, PF Chiu, W Ma, M Qin, GJ Hemink… - US Patent …, 2022 - Google Patents
Use of a NAND array architecture to realize a binary neural network (BNN) allows for matrix
multiplication and accumulation to be performed within the memory array. A unit synapse for …
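
For reference, binary {-1, +1} dot products are commonly computed as XNOR followed by a popcount; the sketch below shows that identity, which is the arithmetic a unit-synapse array needs to reproduce (the NAND-specific mapping is not shown here).

```python
import numpy as np

def xnor_popcount_dot(x_bits, w_bits):
    """Binary dot product with {-1, +1} values encoded as bits {0, 1}:
    dot = 2 * popcount(XNOR(x, w)) - n. Each (input, weight) pair maps to
    one 'unit synapse': matching pairs conduct, mismatching pairs do not."""
    n = len(x_bits)
    matches = np.count_nonzero(~(x_bits ^ w_bits) & 1)
    return 2 * matches - n

x_bits = np.array([1, 0, 1, 1, 0])   # encodes +1, -1, +1, +1, -1
w_bits = np.array([1, 1, 0, 1, 0])   # encodes +1, +1, -1, +1, -1
x_pm, w_pm = 2 * x_bits - 1, 2 * w_bits - 1
print(xnor_popcount_dot(x_bits, w_bits), int(x_pm @ w_pm))  # identical
```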

Realization of neural networks with ternary inputs and ternary weights in NAND memory arrays

TT Hoang, WH Choi, M Lueker-boden - US Patent 11,625,586, 2023 - Google Patents
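
Going by the title, this covers ternary inputs combined with ternary weights. One standard way to express such a dot product with binary operations is to split each ternary operand into positive and negative planes, as sketched below; this is a generic identity, not the patented NAND mapping.

```python
import numpy as np

def ternary_planes(t):
    """Split ternary values {-1, 0, +1} into (plus, minus) binary planes."""
    return (t > 0).astype(int), (t < 0).astype(int)

def ternary_ternary_dot(x, w):
    """Ternary-ternary dot product from four binary AND/popcount passes:
    x.w = |x+ & w+| + |x- & w-| - |x+ & w-| - |x- & w+|."""
    xp, xn = ternary_planes(x)
    wp, wn = ternary_planes(w)
    return int(xp @ wp + xn @ wn - xp @ wn - xn @ wp)

x = np.array([+1, 0, -1, +1, -1])
w = np.array([-1, +1, -1, 0, +1])
print(ternary_ternary_dot(x, w), int(x @ w))   # both give the same result
```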

Kernel transformation techniques to reduce power consumption of binary input, binary weight in-memory convolutional neural network inference engine

TT Hoang, WH Choi, M Lueker-boden - US Patent 11,657,259, 2023 - Google Patents
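
As one illustrative (and not necessarily the patented) kernel transformation: with {0, 1} operands, storing the complement of a 1-heavy kernel lets fewer cells conduct per access, and the true dot product is recovered digitally from popcount(x) minus the raw array result.

```python
import numpy as np

def transform_kernel(w_bits):
    """If a binary kernel has more 1s than 0s, store its complement instead,
    so fewer cells conduct on each access. A flag records the transformation."""
    flipped = w_bits.sum() > len(w_bits) // 2
    return (1 - w_bits if flipped else w_bits), flipped

def dot_with_transform(x_bits, stored_w, flipped):
    """Recover x.w for {0, 1} operands: if the stored kernel is the complement,
    use popcount(x & ~w) = popcount(x) - popcount(x & w)."""
    raw = int(x_bits @ stored_w)                 # in-array AND/popcount result
    return int(x_bits.sum()) - raw if flipped else raw

x = np.array([1, 1, 0, 1, 0, 1])
w = np.array([1, 1, 1, 0, 1, 1])                 # five 1s -> worth flipping
stored, flag = transform_kernel(w)
print(dot_with_transform(x, stored, flag), int(x @ w))   # identical results
```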

Accelerating sparse matrix multiplication in storage class memory-based convolutional neural network inference

TT Hoang, WH Choi, M Lueker-boden - US Patent 11,568,200, 2023 - Google Patents
Techniques are presented for accelerating in-memory matrix multiplication operations for a
convolutional neural network (CNN) inference in which the weights of a filter are stored in the …
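
A minimal sketch of the general principle of sparse acceleration: only a filter's nonzero weights are kept, with their indices, so zero-weight accesses never happen. The compressed format and function names here are assumptions, not the patent's scheme.

```python
import numpy as np

def sparse_conv_mac(activations, weight_values, weight_indices):
    """Accumulate only over the filter's nonzero weights; zero weights are
    never stored or read, so those array accesses (and their energy) vanish."""
    return sum(w * activations[i] for w, i in zip(weight_values, weight_indices))

# Dense filter with many zeros, stored in compressed (value, index) form.
dense_w = np.array([0.0, 0.5, 0.0, 0.0, -1.0, 0.0, 2.0, 0.0])
nz = np.flatnonzero(dense_w)
values, indices = dense_w[nz], nz

x = np.arange(8, dtype=float)
print(sparse_conv_mac(x, values, indices), float(x @ dense_w))  # identical
```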

Convolution accelerator using in-memory computation

YY Lin, FM Lee - US Patent 11,562,229, 2023 - Google Patents
A method for accelerating a convolution of a kernel matrix over an input
matrix for computation of an output matrix using in-memory computation involves storing in …
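
To illustrate the stored-kernel / streamed-window pattern the abstract begins to describe, here is a small numpy sketch that computes a 2-D convolution as repeated dot products against a flattened kernel held "in memory"; the patented accelerator's specifics are not reproduced.

```python
import numpy as np

def conv2d_in_memory(inp, kernel):
    """Convolution as repeated in-array dot products: the flattened kernel is
    stored once (as a column of cell conductances) and each sliding window of
    the input is streamed in as a voltage vector."""
    kh, kw = kernel.shape
    stored = kernel.reshape(-1)                   # what the array would hold
    oh, ow = inp.shape[0] - kh + 1, inp.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for r in range(oh):
        for c in range(ow):
            window = inp[r:r + kh, c:c + kw].reshape(-1)
            out[r, c] = window @ stored           # one in-memory MAC per output
    return out

inp = np.arange(16, dtype=float).reshape(4, 4)
k = np.array([[1.0, 0.0], [0.0, -1.0]])
print(conv2d_in_memory(inp, k))
```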