RepVGG: Making VGG-style ConvNets great again

X Ding, X Zhang, N Ma, J Han… - Proceedings of the …, 2021 - openaccess.thecvf.com
We present a simple but powerful convolutional neural network architecture, which has a
VGG-like inference-time body composed of nothing but a stack of 3x3 convolution and …
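The mechanism behind this plain 3x3 inference body is structural re-parameterization: a training-time block with parallel 3x3, 1x1, and identity branches is algebraically folded into a single 3x3 convolution after training. A minimal sketch of that folding, with BatchNorm folding omitted for brevity (the function name `fuse_repvgg_branches` is illustrative, not the paper's API):

```python
import torch
import torch.nn.functional as F

def fuse_repvgg_branches(k3x3, b3x3, k1x1, b1x1, channels):
    """Fold parallel 3x3, 1x1, and identity branches into one 3x3 conv.

    k3x3: (C, C, 3, 3); k1x1: (C, C, 1, 1); biases: (C,).
    BatchNorm is assumed already folded into each branch's kernel/bias.
    """
    # Zero-pad the 1x1 kernel so it adds onto the 3x3 kernel's center tap.
    fused_k = k3x3 + F.pad(k1x1, [1, 1, 1, 1])
    fused_b = b3x3 + b1x1
    # The identity branch is a 3x3 kernel with 1 at the center of each
    # channel's own filter and 0 elsewhere.
    ident = torch.zeros_like(k3x3)
    for c in range(channels):
        ident[c, c, 1, 1] = 1.0
    return fused_k + ident, fused_b

# Sanity check: the single fused conv reproduces the three-branch sum.
C = 4
x = torch.randn(1, C, 8, 8)
k3, b3 = torch.randn(C, C, 3, 3), torch.randn(C)
k1, b1 = torch.randn(C, C, 1, 1), torch.randn(C)
three_branch = F.conv2d(x, k3, b3, padding=1) + F.conv2d(x, k1, b1) + x
fk, fb = fuse_repvgg_branches(k3, b3, k1, b1, C)
assert torch.allclose(three_branch, F.conv2d(x, fk, fb, padding=1), atol=1e-5)
```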

Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks

T Hoefler, D Alistarh, T Ben-Nun, N Dryden… - Journal of Machine …, 2021 - jmlr.org
The growing energy and performance costs of deep learning have driven the community to
reduce the size of neural networks by selectively pruning components. Similarly to their …
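The survey covers many pruning and regrowth schemes; the simplest baseline in this literature is magnitude pruning, which zeroes the smallest-magnitude weights. A generic illustration of that idea, not tied to any particular method the survey discusses:

```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Return a 0/1 mask keeping the largest-magnitude weights.

    sparsity: fraction of weights to remove (e.g. 0.9 removes 90%).
    """
    k = int(sparsity * weight.numel())
    if k == 0:
        return torch.ones_like(weight)
    # Threshold = k-th smallest absolute value; everything at or below
    # the threshold is pruned.
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()

w = torch.randn(64, 64)
mask = magnitude_prune(w, 0.9)
w_sparse = w * mask
print(f"kept {mask.mean().item():.1%} of weights")
```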

Revisiting random channel pruning for neural network compression

Y Li, K Adamczewski, W Li, S Gu… - Proceedings of the …, 2022 - openaccess.thecvf.com
Channel (or 3D filter) pruning serves as an effective way to accelerate the inference of
neural networks. There has been a flurry of algorithms that try to solve this practical problem …
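Random channel pruning, as the title suggests, picks the channels to remove uniformly at random instead of by a learned importance score. A minimal PyTorch sketch of slicing a conv layer down to a random channel subset (the helper `random_channel_prune` is hypothetical):

```python
import torch
import torch.nn as nn

def random_channel_prune(conv: nn.Conv2d, keep_ratio: float, seed: int = 0):
    """Build a thinner Conv2d keeping a random subset of output channels.

    Returns the new layer and the kept indices; the next layer's input
    channels must be sliced with the same indices to stay consistent.
    """
    g = torch.Generator().manual_seed(seed)
    n_keep = max(1, int(keep_ratio * conv.out_channels))
    keep = torch.randperm(conv.out_channels, generator=g)[:n_keep].sort().values
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       conv.stride, conv.padding, bias=conv.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(conv.weight[keep])
        if conv.bias is not None:
            pruned.bias.copy_(conv.bias[keep])
    return pruned, keep

conv = nn.Conv2d(16, 32, 3, padding=1)
thin, kept = random_channel_prune(conv, keep_ratio=0.5)
out = thin(torch.randn(1, 16, 8, 8))  # 16 output channels instead of 32
```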

ACNet: Strengthening the kernel skeletons for powerful CNN via asymmetric convolution blocks

X Ding, Y Guo, G Ding, J Han - Proceedings of the IEEE …, 2019 - openaccess.thecvf.com
As designing an appropriate Convolutional Neural Network (CNN) architecture in the
context of a given application usually involves heavy human work or numerous GPU hours …
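ACNet's asymmetric convolution block trains parallel 3x3, 1x3, and 3x1 branches and, at deployment, folds the asymmetric kernels onto the 3x3 kernel's center row and column, which is the "skeleton strengthening" in the title. A sketch of that fusion, with BatchNorm folding again omitted:

```python
import torch
import torch.nn.functional as F

def fuse_acb(k3x3, k1x3, k3x1):
    """Fold 1x3 and 3x1 kernels onto the 3x3 kernel's center row/column.

    Shapes: k3x3 (O, I, 3, 3), k1x3 (O, I, 1, 3), k3x1 (O, I, 3, 1).
    """
    fused = k3x3.clone()
    fused += F.pad(k1x3, [0, 0, 1, 1])  # pad rows: 1x3 -> center row of 3x3
    fused += F.pad(k3x1, [1, 1, 0, 0])  # pad cols: 3x1 -> center col of 3x3
    return fused

O, I = 8, 8
x = torch.randn(1, I, 8, 8)
k3 = torch.randn(O, I, 3, 3)
kh = torch.randn(O, I, 1, 3)
kv = torch.randn(O, I, 3, 1)
# Training-time: three parallel branches, summed (padding keeps sizes equal).
branches = (F.conv2d(x, k3, padding=1)
            + F.conv2d(x, kh, padding=(0, 1))
            + F.conv2d(x, kv, padding=(1, 0)))
fused_out = F.conv2d(x, fuse_acb(k3, kh, kv), padding=1)
assert torch.allclose(branches, fused_out, atol=1e-5)
```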

CHIP: Channel independence-based pruning for compact neural networks

Y Sui, M Yin, Y Xie, H Phan… - Advances in Neural …, 2021 - proceedings.neurips.cc
Filter pruning has been widely used for neural network compression because it enables
practical acceleration. To date, most of the existing filter pruning works explore the …
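The snippet does not spell out the metric; CHIP scores channels by their independence, i.e. how poorly a channel's feature map is explained by the remaining channels, using a nuclear-norm-based measure on feature maps. A rough sketch under that reading (the exact formulation is in the paper; `channel_independence` is an illustrative name):

```python
import torch

def channel_independence(features: torch.Tensor) -> torch.Tensor:
    """Score channels by their contribution to the feature matrix's
    nuclear norm (sum of singular values).

    features: (C, N), one row per channel (flattened activations).
    A channel whose removal barely changes the nuclear norm is well
    explained by the others (less independent), so prune it first.
    """
    full = torch.linalg.matrix_norm(features, ord='nuc')
    scores = torch.empty(features.shape[0])
    for c in range(features.shape[0]):
        masked = features.clone()
        masked[c] = 0.0
        scores[c] = full - torch.linalg.matrix_norm(masked, ord='nuc')
    return scores

feats = torch.randn(16, 256)   # 16 channels, 16x16 spatial flattened
ci = channel_independence(feats)
prune_order = ci.argsort()     # least independent channels first
```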

Group Fisher pruning for practical network compression

L Liu, S Zhang, Z Kuang, A Zhou… - International …, 2021 - proceedings.mlr.press
Network compression has been widely studied since it reduces memory and
computation costs during inference. However, previous methods seldom deal with …
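Fisher pruning estimates the loss increase from removing a channel via a Fisher-information approximation, and the "group" aspect prunes channels that are coupled across layers (e.g. through residual additions) together. A sketch of a standard Fisher-style channel saliency plus a group score, as an illustration rather than the paper's exact estimator:

```python
import torch

def fisher_channel_saliency(activation: torch.Tensor,
                            grad: torch.Tensor) -> torch.Tensor:
    """Fisher-style importance for each channel of one layer.

    activation, grad: (N, C, H, W); grad is dLoss/dActivation.
    With a per-channel mask m inserted at this layer (m = 1 at the
    working point), dLoss/dm_c = sum over spatial positions of
    grad * activation; the Fisher approximation scores channel c by
    the squared per-sample derivative, averaged over the batch.
    """
    per_sample = (activation * grad).sum(dim=(2, 3))   # (N, C)
    return per_sample.pow(2).mean(dim=0)               # (C,)

# Channels coupled across layers must be pruned together; a group's
# score is the sum of the per-layer saliencies of its members.
act1, g1 = torch.randn(8, 32, 14, 14), torch.randn(8, 32, 14, 14)
act2, g2 = torch.randn(8, 32, 14, 14), torch.randn(8, 32, 14, 14)
group_score = fisher_channel_saliency(act1, g1) + fisher_channel_saliency(act2, g2)
prune_first = group_score.argmin()   # least important coupled channel
```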