Deep learning in mobile and wireless networking: A survey
The rapid uptake of mobile devices and the rising popularity of mobile applications and
services pose unprecedented demands on mobile and wireless networking infrastructure …
Deep learning for IoT big data and streaming analytics: A survey
M Mohammadi, A Al-Fuqaha, S Sorour… - … Surveys & Tutorials, 2018 - ieeexplore.ieee.org
In the era of the Internet of Things (IoT), an enormous amount of sensing devices collect
and/or generate various sensory data over time for a wide range of fields and applications …
A 64-tile 2.4-Mb in-memory-computing CNN accelerator employing charge-domain compute
H Valavi, PJ Ramadge, E Nestler… - IEEE Journal of Solid …, 2019 - ieeexplore.ieee.org
Large-scale matrix-vector multiplications, which dominate in deep neural networks (DNNs),
are limited by data movement in modern VLSI technologies. This paper addresses data …
UNPU: An energy-efficient deep neural network accelerator with fully variable weight bit precision
An energy-efficient deep neural network (DNN) accelerator, unified neural processing unit
(UNPU), is proposed for mobile deep learning applications. The UNPU can support both …
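The headline feature in the UNPU entry is weight bit precision that can vary per layer. As a rough software analogue of that trade-off (an assumption for illustration, not the paper's bit-serial circuit technique), a uniform symmetric quantizer shows how the same weights stored at fewer bits trade reconstruction error for storage and energy; the function name and bit widths below are illustrative.

```python
import numpy as np

# Illustrative uniform symmetric quantizer: a software stand-in for the idea that
# weights stored at fewer bits cost less storage/energy at some accuracy loss.
# Function name and bit widths are assumptions, not part of the UNPU design.
def quantize_weights(w, bits):
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8-bit signed weights
    scale = np.max(np.abs(w)) / qmax      # one scale factor per tensor
    q = np.round(w / scale).astype(np.int32)
    return q, scale

w = np.random.randn(128, 128).astype(np.float32)
for bits in (4, 8, 16):
    q, s = quantize_weights(w, bits)
    mse = np.mean((q * s - w) ** 2)
    print(f"{bits:2d}-bit weights -> reconstruction MSE {mse:.2e}")
```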
CirCNN: accelerating and compressing deep neural networks using block-circulant weight matrices
Large-scale deep neural networks (DNNs) are both compute and memory intensive. As the
size of DNNs continues to grow, it is critical to improve the energy efficiency and …
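The block-circulant idea behind CirCNN rests on a standard identity: a circulant matrix-vector product is a circular convolution, so it needs only O(n) stored parameters and O(n log n) time via FFTs instead of O(n^2). A minimal NumPy sketch of that identity (not the CirCNN hardware pipeline); the sizes are arbitrary.

```python
import numpy as np

# A circulant matrix is fully described by one vector w (its first column), and
# multiplying it by x equals the circular convolution of w and x, which FFTs
# compute in O(n log n). Block-circulant layers apply this per block.
def circulant_matvec_fft(w, x):
    return np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(x)))

def circulant_matrix(w):
    """Explicit dense circulant matrix, used only to check the FFT shortcut."""
    return np.stack([np.roll(w, i) for i in range(len(w))], axis=1)

w = np.random.randn(8)   # 8 stored parameters instead of 64
x = np.random.randn(8)
assert np.allclose(circulant_matrix(w) @ x, circulant_matvec_fft(w, x))
```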
ADMM-NN: An algorithm-hardware co-design framework of DNNs using alternating direction methods of multipliers
Model compression is an important technique to facilitate efficient embedded and hardware
implementations of deep neural networks (DNNs); a number of prior works are dedicated to …
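The ADMM framework in this line of work treats the sparsity constraint as a separate variable and alternates between training the weights with a quadratic pull toward that variable and projecting onto the sparse set. A toy NumPy version on a least-squares "layer" sketches the loop; the shapes, rho, step size, and sparsity budget are assumptions for illustration, not the paper's setup.

```python
import numpy as np

# Toy ADMM pruning loop on a least-squares layer: min ||A W - B||^2 s.t. W has at
# most k nonzeros. All sizes and hyperparameters are illustrative assumptions.
def project_topk(M, k):
    """Euclidean projection onto {at most k nonzeros}: keep the k largest-magnitude entries."""
    out = np.zeros_like(M)
    idx = np.unravel_index(np.argsort(np.abs(M), axis=None)[-k:], M.shape)
    out[idx] = M[idx]
    return out

rng = np.random.default_rng(0)
A, B = rng.standard_normal((64, 32)), rng.standard_normal((64, 16))
W = rng.standard_normal((32, 16))
rho, lr, k = 1.0, 1e-3, 64
Z, U = project_topk(W, k), np.zeros_like(W)

for _ in range(500):
    grad = 2 * A.T @ (A @ W - B) + rho * (W - Z + U)   # loss gradient + augmented term
    W -= lr * grad                                      # primal update on the weights
    Z = project_topk(W + U, k)                          # projection onto the sparsity set
    U += W - Z                                          # dual update
```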
Packing sparse convolutional neural networks for efficient systolic array implementations: Column combining under joint optimization
This paper describes a novel approach of packing sparse convolutional neural networks into
a denser format for efficient implementations using systolic arrays. By combining multiple …
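Column combining packs several sparse weight columns into one dense column so the systolic array has far fewer idle cells; when two columns compete for the same row, only one entry survives and the rest are pruned and recovered by retraining. A small NumPy sketch of the packing step; the group size and keep-the-largest rule here are illustrative assumptions, not the paper's exact joint optimization.

```python
import numpy as np

# Pack groups of sparse columns into single dense columns, keeping at most one
# entry per row in each group (largest magnitude wins). Group size is an assumption.
def combine_columns(W, group=4):
    n_rows, n_cols = W.shape
    packed = np.zeros((n_rows, n_cols // group))
    for g in range(n_cols // group):
        block = W[:, g * group:(g + 1) * group]
        winners = np.argmax(np.abs(block), axis=1)          # surviving column per row
        packed[:, g] = block[np.arange(n_rows), winners]
    return packed

W = np.random.randn(16, 16) * (np.random.rand(16, 16) < 0.25)   # ~75% sparse weights
print(combine_columns(W).shape)   # (16, 4): a 4x narrower systolic array footprint
```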
ConfuciuX: Autonomous hardware resource assignment for DNN accelerators using reinforcement learning
DNN accelerators provide efficiency by leveraging reuse of activations/weights/outputs
during the DNN computations to reduce data movement from DRAM to the chip. The reuse is …
Transfer learning for sEMG hand gestures recognition using convolutional neural networks
U Côté-Allard, CL Fall… - … on systems, man …, 2017 - ieeexplore.ieee.org
In the realm of surface electromyography (sEMG) gesture recognition, deep learning
algorithms are seldom employed. This is due in part to the large quantity of data required for …
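The transfer-learning recipe hinted at in this entry addresses the data problem by pretraining a convolutional network on sEMG recordings from many source users and adapting it to a new user with little data. A minimal PyTorch sketch of that freeze-and-fine-tune pattern; the architecture, channel counts, class count, checkpoint name, and optimizer settings are assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

# Minimal freeze-and-fine-tune sketch: reuse a feature extractor pretrained on many
# users' sEMG data and train only a small classification head on the target user.
# Layer sizes, class count, and the checkpoint file name are illustrative assumptions.
class GestureNet(nn.Module):
    def __init__(self, n_channels=8, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                       # x: (batch, channels, time)
        return self.head(self.features(x).squeeze(-1))

model = GestureNet()
# model.load_state_dict(torch.load("pretrained_source_users.pt"))  # hypothetical checkpoint

for p in model.features.parameters():           # freeze the shared feature extractor
    p.requires_grad = False
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)     # fine-tune the head only
```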
Non-structured DNN weight pruning—Is it beneficial in any platform?
Large deep neural network (DNN) models pose the key challenge to energy efficiency due
to the significantly higher energy consumption of off-chip DRAM accesses than arithmetic or …
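Non-structured (element-wise) pruning, whose hardware value this paper questions, is typically done by zeroing the smallest-magnitude weights. A short NumPy sketch of that baseline operation; the 90% sparsity target is an illustrative assumption.

```python
import numpy as np

# Element-wise magnitude pruning: zero the smallest |w| entries, producing the
# irregular sparsity pattern whose platform benefit the paper examines.
def magnitude_prune(weights, sparsity=0.9):
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    threshold = np.partition(flat, k)[k] if k < flat.size else np.inf
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

w = np.random.randn(256, 256)
pruned, mask = magnitude_prune(w)
print("fraction of weights kept:", mask.mean())   # roughly 0.1
```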