MAERI: Enabling flexible dataflow mapping over DNN accelerators via reconfigurable interconnects
Deep neural networks (DNN) have demonstrated highly promising results across computer
vision and speech recognition, and are becoming foundational for ubiquitous AI. The …
ReRAM-based processing-in-memory architecture for recurrent neural network acceleration
We present a recurrent neural network (RNN) accelerator design with resistive random-
access memory (ReRAM)-based processing-in-memory (PIM) architecture. Distinguished …
FlexBlock: A flexible DNN training accelerator with multi-mode block floating point support
When training deep neural networks (DNNs), expensive floating point arithmetic units are
used in GPUs or custom neural processing units (NPUs). To reduce the burden of floating …
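The snippet above only names FlexBlock's core idea, multi-mode block floating point (BFP); the paper's actual formats and hardware are not shown here. As a rough, generic illustration of what block floating point means, the sketch below (the function name `to_bfp` and the block-size/mantissa parameters are made up for illustration, not taken from the paper) groups values into fixed-size blocks that share one exponent while each value keeps a narrow mantissa.

```python
import numpy as np

def to_bfp(x, block_size=16, mantissa_bits=8):
    """Quantize a 1-D array to block floating point: each block of
    `block_size` values shares one exponent, and each value keeps a
    signed mantissa of `mantissa_bits` bits. Illustrative only."""
    x = np.asarray(x, dtype=np.float32)
    pad = (-len(x)) % block_size
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)

    # Shared exponent per block, chosen from the largest magnitude value.
    max_mag = np.max(np.abs(blocks), axis=1, keepdims=True)
    exp = np.floor(np.log2(np.maximum(max_mag, 1e-38))) + 1
    scale = 2.0 ** exp

    # Round mantissas to the chosen width, then reconstruct the lossy values.
    half = 2 ** (mantissa_bits - 1)
    mant = np.clip(np.round(blocks / scale * half), -half, half - 1)
    approx = mant / half * scale
    return approx.reshape(-1)[:len(x)]

w = np.random.randn(100).astype(np.float32)
w_bfp = to_bfp(w, block_size=16, mantissa_bits=8)
print("max abs error:", float(np.max(np.abs(w - w_bfp))))
```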
Approximate computing for long short term memory (LSTM) neural networks
Long Short Term Memory (LSTM) networks are a class of recurrent neural networks that are
widely used for machine learning tasks involving sequences, including machine translation …
Approximate LSTM computing for energy-efficient speech recognition
This paper presents an approximate computing method of long short-term memory (LSTM)
operations for energy-efficient end-to-end speech recognition. We newly introduce the …
A power-aware digital multilayer perceptron accelerator with on-chip training based on approximate computing
This paper proposes that approximation by reducing bit-precision and using an inexact
multiplier can save the power consumption of a digital multilayer perceptron accelerator during …
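The snippet names two approximation knobs, reduced bit-precision and an inexact multiplier, without detailing either. The sketch below illustrates only the first knob in a generic way (the helper `quantize` and the bit widths are assumptions, not the paper's scheme): weights and inputs are rounded to a few signed-integer levels before a layer's matrix-vector product.

```python
import numpy as np

def quantize(x, bits=8):
    """Uniform symmetric quantization to `bits`-bit signed integers,
    returned in dequantized (approximate) floating-point form.
    Illustrative only; not the paper's exact scheme."""
    qmax = 2 ** (bits - 1) - 1
    max_abs = float(np.max(np.abs(x)))
    if max_abs == 0.0:
        return np.zeros_like(x)
    scale = max_abs / qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return (q * scale).astype(x.dtype)

# Approximate forward pass of one multilayer-perceptron layer with
# low-precision weights and inputs, compared against full precision.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32)).astype(np.float32)
x = rng.standard_normal(32).astype(np.float32)

y_exact = W @ x
y_approx = quantize(W, bits=6) @ quantize(x, bits=6)
print("relative error:", float(np.linalg.norm(y_exact - y_approx) / np.linalg.norm(y_exact)))
```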
Adaptive weight compression for memory-efficient neural networks
Neural networks generally require significant memory capacity/bandwidth to store/access a
large number of synaptic weights. This paper presents an application of JPEG image …
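The snippet says the paper applies JPEG image compression to synaptic weights but is cut off before the details. As a hedged sketch of the JPEG building block such a scheme would rest on, the code below runs a blockwise 2-D DCT, coarse uniform quantization, and inverse DCT on an 8x8 weight tile, assuming scipy is available; the function `compress_block` and the quantization step are illustrative assumptions, and the paper's adaptive quantization is not reproduced.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, q_step=0.05):
    """JPEG-style lossy compression of one 8x8 weight tile: 2-D DCT,
    coarse uniform quantization of the coefficients, inverse DCT.
    Illustrative only; the paper's adaptive quantization is not shown."""
    coeffs = dctn(block, norm="ortho")
    coeffs_q = np.round(coeffs / q_step) * q_step   # small coefficients collapse to zero
    return idctn(coeffs_q, norm="ortho")

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((8, 8)).astype(np.float32)  # one 8x8 tile of weights
W_hat = compress_block(W)

kept = int(np.count_nonzero(np.round(dctn(W, norm="ortho") / 0.05)))
print("nonzero quantized DCT coefficients:", kept, "of 64")
print("max abs reconstruction error:", float(np.max(np.abs(W - W_hat))))
```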
Memory-reduced network stacking for edge-level CNN architecture with structured weight pruning
This paper presents a novel stacking and multi-level indexing scheme for convolutional
neural networks (CNNs) used in energy-limited edge-level systems. Basically, the proposed …
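The snippet mentions structured weight pruning alongside a stacking and multi-level indexing scheme, none of which is spelled out. The sketch below shows only one common form of structured pruning, removing whole output channels of a convolution kernel by L2 norm; the function name, keep ratio, and tensor layout are assumptions, and the paper's indexing scheme is not reproduced.

```python
import numpy as np

def prune_output_channels(W, keep_ratio=0.5):
    """Structured pruning sketch: zero entire output channels of a conv
    weight tensor shaped (out_ch, in_ch, kH, kW), keeping the channels
    with the largest L2 norms. Illustrative only."""
    norms = np.linalg.norm(W.reshape(W.shape[0], -1), axis=1)
    n_keep = max(1, int(round(keep_ratio * W.shape[0])))
    keep = np.argsort(norms)[-n_keep:]          # indices of the strongest channels
    mask = np.zeros(W.shape[0], dtype=bool)
    mask[keep] = True
    return W * mask[:, None, None, None], mask

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 8, 3, 3)).astype(np.float32)
W_pruned, mask = prune_output_channels(W, keep_ratio=0.5)
print("kept channels:", int(mask.sum()), "of", W.shape[0])
```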
GeneSys: Enabling continuous learning through neural network evolution in hardware
Modern deep learning systems rely on (a) a hand-tuned neural network topology, (b)
massive amounts of labeled training data, and (c) extensive training over large-scale …
Design and analysis of a neural network inference engine based on adaptive weight compression
Neural networks generally require significant memory capacity/bandwidth to store/access a
large number of synaptic weights. This paper presents the design of an energy-efficient neural …