Group sparsity: The hinge between filter pruning and decomposition for network compression

Y Li, S Gu, C Mayer, L Van Gool… - Proceedings of the …, 2020 - openaccess.thecvf.com
In this paper, we analyze two popular network compression techniques, i.e., filter pruning and
low-rank decomposition, in a unified sense. By simply changing the way the sparsity …
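
A minimal PyTorch sketch of the group-sparsity ingredient this paper builds on: a group-lasso penalty with one group per convolutional filter. Driving a group's norm to zero marks that filter as prunable; applying the same penalty to the factors of a low-rank reshaping of the weight instead yields a decomposition-style compression. Function name and coefficient are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

def group_lasso_filters(conv: nn.Conv2d) -> torch.Tensor:
    """Sum of per-filter L2 norms: one group per output filter."""
    # conv.weight: (out_channels, in_channels, kH, kW)
    w = conv.weight.flatten(start_dim=1)   # one row per filter
    return w.norm(dim=1).sum()

conv = nn.Conv2d(64, 128, kernel_size=3)
penalty = 1e-4 * group_lasso_filters(conv)   # added to the task loss in training
penalty.backward()
```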

Toward compact convnets via structure-sparsity regularized filter pruning

S Lin, R Ji, Y Li, C Deng, X Li - IEEE Transactions on Neural …, 2019 - ieeexplore.ieee.org
The success of convolutional neural networks (CNNs) in computer vision applications has
been accompanied by a significant increase in computation and memory costs, which …

DHP: Differentiable meta pruning via hypernetworks

Y Li, S Gu, K Zhang, L Van Gool, R Timofte - Computer Vision–ECCV 2020 …, 2020 - Springer
Network pruning has been the driving force for the acceleration of neural networks and the
alleviation of model storage/transmission burden. With the advent of AutoML and neural …
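
A toy sketch of the hypernetwork idea named in the title: a small per-layer latent vector generates the layer's effective weights, so sparsifying the latent prunes whole outputs differentiably. This is a deliberate simplification (DHP's actual hypernetwork is more elaborate); the class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn

class HyperLinear(nn.Module):
    """Per-output latent gates generate the effective weight, so pruning
    decisions live in a small, differentiable vector (illustrative only)."""
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.base = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.latent = nn.Parameter(torch.ones(out_features))   # pruning lives here

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weight = self.latent.unsqueeze(1) * self.base          # zero gate -> dead row
        return x @ weight.t()

layer = HyperLinear(16, 4)
y = layer(torch.randn(2, 16))
penalty = layer.latent.abs().sum()   # L1 on the latent drives gates to zero
```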

Continual learning with node-importance based adaptive group sparse regularization

S Jung, H Ahn, S Cha, T Moon - Advances in Neural …, 2020 - proceedings.neurips.cc
We propose a novel regularization-based continual learning method, dubbed Adaptive
Group Sparsity based Continual Learning (AGS-CL), using two group sparsity-based …
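
A simplified PyTorch stand-in for the node-importance-weighted group sparsity described here: groups are a node's incoming weights, and less important nodes get a stronger sparsity push, keeping them free for future tasks. The importance measure and inverse weighting are placeholders; AGS-CL's actual objective uses two coupled terms.

```python
import torch
import torch.nn as nn

def adaptive_group_sparsity(layer: nn.Linear, importance: torch.Tensor,
                            mu: float = 1e-3) -> torch.Tensor:
    """Group lasso over each output node's incoming weights, weighted
    inversely by node importance (simplified sketch, not AGS-CL's exact loss)."""
    group_norms = layer.weight.norm(dim=1)      # one group per output node
    weights = mu / (importance + 1e-8)          # less important -> larger penalty
    return (weights * group_norms).sum()

fc = nn.Linear(32, 10)
importance = torch.rand(10)   # e.g. a node's average post-activation over past tasks
reg = adaptive_group_sparsity(fc, importance)
```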

Exploiting kernel sparsity and entropy for interpretable CNN compression

Y Li, S Lin, B Zhang, J Liu… - Proceedings of the …, 2019 - openaccess.thecvf.com
Compressing convolutional neural networks (CNNs) has received ever-increasing research
focus. However, most existing CNN compression methods do not interpret their inherent …

Transformed ℓ1 regularization for learning sparse deep neural networks

R Ma, J Miao, L Niu, P Zhang - Neural Networks, 2019 - Elsevier
Deep Neural Networks (DNNs) have achieved extraordinary success in numerous
areas. However, DNNs often carry a large number of weight parameters, leading to the …
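
For reference, the commonly used form of the transformed ℓ1 penalty is T_a(x) = (a+1)|x| / (a+|x|), which bridges ℓ0 (as a → 0+) and ℓ1 (as a → ∞). A minimal PyTorch sketch, with the coefficient and usage purely illustrative (the paper's training details may differ):

```python
import torch

def transformed_l1(w: torch.Tensor, a: float = 1.0) -> torch.Tensor:
    """Elementwise T_a(x) = (a + 1)|x| / (a + |x|), summed. As a -> 0+ this
    approaches the L0 count; as a -> inf it approaches the L1 norm."""
    absw = w.abs()
    return ((a + 1.0) * absw / (a + absw)).sum()

w = torch.randn(100, requires_grad=True)
loss = 1e-3 * transformed_l1(w, a=0.5)   # added to the task loss
loss.backward()
```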

A survey of model compression strategies for object detection

Z Lyu, T Yu, F Pan, Y Zhang, J Luo, D Zhang… - Multimedia Tools and …, 2024 - Springer
Deep neural networks (DNNs) have achieved great success in many object detection tasks.
However, such DNN-based large object detection models are generally computationally …

Learning intrinsic sparse structures within long short-term memory

W Wen, Y He, S Rajbhandari, M Zhang… - arXiv preprint arXiv …, 2017 - arxiv.org
Model compression is significant for the wide adoption of Recurrent Neural Networks
(RNNs) in both user devices possessing limited resources and business clusters requiring …
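
A simplified single-layer sketch of the "intrinsic sparse structure" grouping from this paper: for each hidden unit, collect its rows in all four LSTM gates of both weight matrices plus its outgoing column in the recurrent matrix, so a zeroed group removes the unit from the hidden state consistently. The loop-based implementation is for clarity only.

```python
import torch
import torch.nn as nn

def iss_group_lasso(lstm: nn.LSTM) -> torch.Tensor:
    """Group lasso over ISS groups of a single-layer LSTM (sketch)."""
    H = lstm.hidden_size
    w_ih = lstm.weight_ih_l0   # (4H, input_size): rows k, k+H, k+2H, k+3H belong to unit k
    w_hh = lstm.weight_hh_l0   # (4H, H): same row pattern, plus outgoing column k
    penalty = torch.tensor(0.0)
    for k in range(H):
        rows = [k + g * H for g in range(4)]
        group = torch.cat([w_ih[rows].flatten(),
                           w_hh[rows].flatten(),
                           w_hh[:, k].flatten()])
        penalty = penalty + group.norm()
    return penalty

lstm = nn.LSTM(input_size=32, hidden_size=64)
reg = 1e-4 * iss_group_lasso(lstm)
```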

Redundant feature pruning for accelerated inference in deep neural networks

BO Ayinde, T Inanc, JM Zurada - Neural Networks, 2019 - Elsevier
This paper presents an efficient technique to reduce the inference cost of deep and/or wide
convolutional neural network models by pruning redundant features (or filters). Previous …
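
A PyTorch sketch of the redundancy test this entry describes: flag filters whose flattened weights are nearly parallel to an earlier filter, since they compute near-duplicate features. The greedy pairwise rule and threshold are an illustrative stand-in for the paper's similarity-based clustering.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def redundant_filter_indices(conv: nn.Conv2d, threshold: float = 0.95) -> list:
    """Return indices of filters with cosine similarity above `threshold`
    to an earlier, kept filter (candidates for pruning)."""
    w = conv.weight.flatten(start_dim=1)   # (out_channels, rest)
    w = F.normalize(w, dim=1)
    sim = w @ w.t()                        # pairwise cosine similarity
    redundant = []
    for i in range(sim.size(0)):
        for j in range(i):
            if j not in redundant and sim[i, j] > threshold:
                redundant.append(i)        # keep filter j, drop filter i
                break
    return redundant

conv = nn.Conv2d(16, 32, kernel_size=3)
to_prune = redundant_filter_indices(conv, threshold=0.9)
```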

DecomVQANet: Decomposing visual question answering deep network via tensor decomposition and regression

Z Bai, Y Li, M Woźniak, M Zhou, D Li - Pattern Recognition, 2021 - Elsevier
The model we developed is a novel, comprehensive solution to compress and accelerate
Visual Question Answering systems. In our algorithm, the Convolutional Neural Network is …
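
A matrix-level stand-in for the decomposition-based compression this entry applies to a VQA network: replace a fully connected layer's weight with a truncated-SVD factorization into two smaller layers. The paper uses higher-order tensor decompositions; the rank and helper name here are illustrative.

```python
import torch
import torch.nn as nn

def factorize_linear(fc: nn.Linear, rank: int) -> nn.Sequential:
    """Replace an (out x in) linear layer with two rank-r layers via
    truncated SVD, cutting parameters from out*in to r*(out + in)."""
    U, S, Vh = torch.linalg.svd(fc.weight, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]             # fold singular values into U
    first = nn.Linear(fc.in_features, rank, bias=False)
    second = nn.Linear(rank, fc.out_features, bias=fc.bias is not None)
    first.weight.data = Vh[:rank].clone()    # (rank, in)
    second.weight.data = U_r.clone()         # (out, rank)
    if fc.bias is not None:
        second.bias.data = fc.bias.data.clone()
    return nn.Sequential(first, second)

fc = nn.Linear(1024, 1024)
compressed = factorize_linear(fc, rank=64)   # ~8x fewer weights
```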