RepVGG: Making VGG-style ConvNets great again
We present a simple but powerful convolutional neural network architecture, which has a
VGG-like inference-time body composed of nothing but a stack of 3x3 convolution and …
Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks
The growing energy and performance costs of deep learning have driven the community to
reduce the size of neural networks by selectively pruning components. Similarly to their …
Revisiting random channel pruning for neural network compression
Channel (or 3D filter) pruning serves as an effective way to accelerate the inference of
neural networks. There has been a flurry of algorithms that try to solve this practical problem …
ACNet: Strengthening the kernel skeletons for powerful CNN via asymmetric convolution blocks
As designing an appropriate Convolutional Neural Network (CNN) architecture in the
context of a given application usually involves heavy human work or numerous GPU hours …
CHIP: Channel independence-based pruning for compact neural networks
Filter pruning has been widely used for neural network compression because it enables
practical acceleration. To date, most existing filter pruning works explore the …
Group Fisher pruning for practical network compression
Network compression has been widely studied since it reduces the memory and
computation cost during inference. However, previous methods seldom deal with …