A survey on deep neural network pruning: Taxonomy, comparison, analysis, and recommendations
Modern deep neural networks, particularly recent large language models, come with
massive model sizes that require significant computational and storage resources. To …
Efficient acceleration of deep learning inference on resource-constrained edge devices: A review
Successful integration of deep neural networks (DNNs) or deep learning (DL) has resulted
in breakthroughs in many areas. However, deploying these highly accurate models for data …
Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks
The growing energy and performance costs of deep learning have driven the community to
reduce the size of neural networks by selectively pruning components. Similarly to their …
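Since the entries above only name pruning without showing it, a minimal sketch may help: the snippet below performs generic unstructured global magnitude pruning in PyTorch. It is an illustrative baseline, not the taxonomy or any specific method from these surveys; the sparsity level, layer types, and toy model are assumptions.

```python
import torch
import torch.nn as nn

def global_magnitude_prune(model: nn.Module, sparsity: float = 0.9) -> None:
    """Zero out the smallest-magnitude weights across all Linear/Conv layers.

    A minimal, generic illustration of unstructured magnitude pruning;
    real pipelines typically add masks, fine-tuning, and sparse kernels.
    """
    # Collect all prunable weight tensors.
    weights = [m.weight for m in model.modules()
               if isinstance(m, (nn.Linear, nn.Conv2d))]
    # Find the global magnitude threshold for the requested sparsity.
    all_mags = torch.cat([w.detach().abs().flatten() for w in weights])
    k = int(sparsity * all_mags.numel())
    threshold = torch.kthvalue(all_mags, k).values if k > 0 else all_mags.min() - 1
    # Zero every weight whose magnitude falls at or below the threshold.
    with torch.no_grad():
        for w in weights:
            w.mul_((w.abs() > threshold).float())

# Usage: prune a toy MLP to roughly 90% sparsity.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
global_magnitude_prune(model, sparsity=0.9)
```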
Pruning and quantization for deep neural network acceleration: A survey
T Liang, J Glossner, L Wang, S Shi, X Zhang - Neurocomputing, 2021 - Elsevier
Deep neural networks have been applied in many applications exhibiting extraordinary
abilities in the field of computer vision. However, complex network architectures challenge …
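As a rough companion to the quantization half of this survey, the sketch below applies symmetric per-tensor int8 post-training quantization with plain tensor ops. It is an illustrative scheme under simplifying assumptions; real deployments typically use per-channel scales, calibration data, and fused int8 kernels.

```python
import torch

def quantize_int8(w: torch.Tensor):
    """Symmetric per-tensor int8 quantization: w ~ scale * q, q in [-127, 127].

    Generic post-training scheme for illustration only.
    """
    scale = w.abs().max() / 127.0          # map the largest magnitude to 127
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale

def dequantize_int8(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.to(torch.float32) * scale

# Usage: quantize a random weight matrix and measure the reconstruction error.
w = torch.randn(256, 256)
q, scale = quantize_int8(w)
err = (w - dequantize_int8(q, scale)).abs().max()
print(f"max abs quantization error: {err:.5f}")
```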
AutoML: A survey of the state-of-the-art
Deep learning (DL) techniques have obtained remarkable achievements on various tasks,
such as image recognition, object detection, and language modeling. However, building a …
On the efficacy of knowledge distillation
JH Cho, B Hariharan - Proceedings of the IEEE/CVF …, 2019 - openaccess.thecvf.com
In this paper, we present a thorough evaluation of the efficacy of knowledge distillation and
its dependence on student and teacher architectures. Starting with the observation that more …
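Studies of this kind typically evaluate the standard Hinton-style soft-target objective, which mixes a temperature-scaled KL term with ordinary cross-entropy. The sketch below states that loss; the values of `temperature` and `alpha` are illustrative defaults, not settings from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.9):
    """Hinton-style distillation: alpha * KL(teacher || student) + (1 - alpha) * CE.

    Temperature softens both distributions; the T**2 factor keeps gradient
    magnitudes comparable across temperatures. Hyperparameters are illustrative.
    """
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Usage with dummy logits for a batch of 8 examples and 10 classes.
student = torch.randn(8, 10, requires_grad=True)
teacher = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student, teacher, labels)
loss.backward()
```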
Green AI
Communications of the ACM, December 2020, Vol. 63, No. 12, contributed articles. DOI: 10.1145/3381831 …
Importance estimation for neural network pruning
Structural pruning of neural network parameters reduces computational, energy, and
memory transfer costs during inference. We propose a novel method that estimates the …
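The snippet cuts off before the estimator is described, so the sketch below shows only a generic first-order importance score (squared gradient-times-weight, accumulated over one batch) of the kind used to rank parameters for pruning; it is not necessarily this paper's exact formulation, and the toy model and grouping are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def first_order_importance(model: nn.Module, inputs, labels):
    """Score each Linear/Conv weight tensor by sum((grad * weight) ** 2).

    Generic first-order importance criterion for illustration; the cited
    paper's estimator may differ in grouping and normalization.
    """
    model.zero_grad()
    loss = F.cross_entropy(model(inputs), labels)
    loss.backward()
    scores = {}
    for name, module in model.named_modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            g, w = module.weight.grad, module.weight
            scores[name] = (g * w).pow(2).sum().item()
    return scores

# Usage on a toy classifier: lower scores suggest more prunable layers.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 5))
x, y = torch.randn(16, 32), torch.randint(0, 5, (16,))
print(first_order_importance(model, x, y))
```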
Edge intelligence: Empowering intelligence to the edge of network
Edge intelligence refers to a set of connected systems and devices for data collection,
caching, processing, and analysis in proximity to where data are captured based on artificial …
Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing
Existing popular methods for semi-supervised learning with Graph Neural Networks (such
as the Graph Convolutional Network) provably cannot learn a general class of …
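To make the idea of higher-order neighborhood mixing concrete, the sketch below concatenates features propagated through several powers of a symmetrically normalized adjacency matrix. This is a simplified dense illustration of the general mechanism, not the full MixHop model, which adds sparsification and architecture search; the layer sizes and toy graph are assumptions.

```python
import torch
import torch.nn as nn

class HigherOrderMixLayer(nn.Module):
    """Concatenate relu(A_hat^p @ X @ W_p) over several neighborhood orders p.

    Simplified dense illustration of higher-order neighborhood mixing.
    """
    def __init__(self, in_dim: int, out_dim: int, powers=(0, 1, 2)):
        super().__init__()
        self.powers = powers
        self.linears = nn.ModuleList(nn.Linear(in_dim, out_dim) for _ in powers)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        outs = []
        for p, lin in zip(self.powers, self.linears):
            # Propagate features p hops through the normalized adjacency.
            h = x
            for _ in range(p):
                h = adj_norm @ h
            outs.append(torch.relu(lin(h)))
        return torch.cat(outs, dim=-1)

def normalize_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)

# Usage on a tiny random symmetric graph with 5 nodes and 8-dim features.
adj = (torch.rand(5, 5) > 0.6).float()
adj = ((adj + adj.t()) > 0).float()
x = torch.randn(5, 8)
layer = HigherOrderMixLayer(in_dim=8, out_dim=4, powers=(0, 1, 2))
out = layer(x, normalize_adjacency(adj))   # shape: (5, 12)
```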