Communication-efficient federated learning via personalized filter pruning
Q Min, F Luo, W Dong, C Gu, W Ding - Information Sciences, 2024 - Elsevier
With the popularity of mobile devices and the continuous growth of interactive data, FL
(Federated Learning) has gradually become an effective means to address the problems of …
Semantic-driven automatic filter pruning for neural networks
Y Guo, W Gao - … conference on multimedia and expo (ICME), 2022 - ieeexplore.ieee.org
Filter pruning is widely used for neural network compression. However, existing methods
mostly judge the importance of filters by measuring magnitudes or distributions of data in weight …
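For context on the magnitude criterion this entry contrasts with, below is a minimal NumPy sketch of the common L1-norm filter-ranking baseline; the weight shape, keep ratio, and random weights are illustrative assumptions, not the paper's semantic-driven method.

```python
import numpy as np

# Hypothetical conv weight tensor: (out_channels, in_channels, kH, kW).
rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 32, 3, 3))

# Score each filter by the L1 norm of its weights (magnitude criterion).
scores = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)

# Keep the 75% of filters with the largest scores, prune the rest.
keep_ratio = 0.75
n_keep = int(round(keep_ratio * weights.shape[0]))
keep_idx = np.argsort(scores)[::-1][:n_keep]
pruned_weights = weights[np.sort(keep_idx)]

print(pruned_weights.shape)  # (48, 32, 3, 3)
```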
Asymptotic soft cluster pruning for deep neural networks
T Niu, Y Teng, P Zou - arXiv preprint arXiv:2206.08186, 2022 - arxiv.org
Filter pruning introduces structural sparsity by removing selected filters and is thus
particularly effective for reducing complexity. Previous works empirically prune networks …
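As a rough illustration of cluster-style filter pruning (not the paper's asymptotic soft scheme), the sketch below groups similar filters with k-means and keeps one representative per cluster; the shapes, cluster count, and use of scikit-learn are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical conv weights: (out_channels, in_channels, kH, kW).
rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 32, 3, 3))
flat = weights.reshape(weights.shape[0], -1)

# Group similar filters; redundant filters land in the same cluster.
n_clusters = 48
km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(flat)

# Keep, per cluster, the filter closest to the centroid and drop the rest.
keep_idx = []
for c in range(n_clusters):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(flat[members] - km.cluster_centers_[c], axis=1)
    keep_idx.append(members[np.argmin(dists)])

pruned = weights[np.sort(keep_idx)]
print(pruned.shape)  # (48, 32, 3, 3)
```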
A novel channel pruning approach based on local attention and global ranking for CNN model compression
W Lu, Y Jiang, P **g, J Chu… - 2023 IEEE International …, 2023 - ieeexplore.ieee.org
Channel pruning facilitates the acceleration and deployment of convolutional neural
networks on resource-constrained devices. Nevertheless, existing related methods mainly …
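The snippet is truncated, but global ranking of channel importance is a standard idea: pool per-layer scores and apply a single threshold. A minimal sketch, assuming per-layer scores such as batch-norm scale magnitudes and a 40% prune rate, neither of which is taken from the paper:

```python
import numpy as np

# Hypothetical per-layer channel importance scores (e.g. BN scale magnitudes).
rng = np.random.default_rng(0)
layer_scores = {
    "conv1": np.abs(rng.normal(size=64)),
    "conv2": np.abs(rng.normal(size=128)),
    "conv3": np.abs(rng.normal(size=256)),
}

# Global ranking: pool all scores and set one threshold at the target prune
# rate, so weaker layers lose more channels than stronger ones.
prune_rate = 0.4
all_scores = np.concatenate(list(layer_scores.values()))
threshold = np.quantile(all_scores, prune_rate)

keep_masks = {name: s > threshold for name, s in layer_scores.items()}
for name, mask in keep_masks.items():
    print(name, f"{mask.sum()}/{mask.size} channels kept")
```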
Surround the Nonlinearity: Inserting Foldable Convolutional Autoencoders to Reduce Activation Footprint
Modern deep learning architectures, while highly successful, are characterized by
substantial computational and memory demands due to their large number of parameters or …
Shift Pruning: Equivalent Weight Pruning for CNN via Differentiable Shift Operator
Weight pruning is a well-known technique used for network compression. In contrast to filter
pruning, weight pruning produces higher compression ratios as it is more fine-grained …
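To make the contrast with filter pruning concrete, here is a minimal sketch of fine-grained magnitude weight pruning via a binary mask; the matrix size and 90% sparsity are arbitrary assumptions, and this is not the paper's shift-operator formulation.

```python
import numpy as np

# Hypothetical dense weight matrix of one layer.
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 512))

# Fine-grained weight pruning: zero out the smallest-magnitude entries
# individually instead of removing whole filters.
sparsity = 0.9
threshold = np.quantile(np.abs(W), sparsity)
mask = np.abs(W) > threshold
W_pruned = W * mask

print(f"zeroed {1 - mask.mean():.2%} of weights")
```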
Cluster, Reconstruct and Prune: Equivalent Filter Pruning for CNNs without Fine-Tuning
T Niu, Y Teng, P Zou, Y Liu - 2023 IEEE Symposium on …, 2023 - ieeexplore.ieee.org
Network pruning is effective in reducing memory usage and time complexity. However,
current approaches face two common limitations. 1) Pruned filters cannot contribute to the …
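A toy example of "equivalent" pruning without fine-tuning: when two units are duplicates, one can be removed and its outgoing weights folded into the survivor, leaving the network function unchanged. The two-layer linear setup below is a hypothetical illustration, not the paper's cluster-and-reconstruct procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))        # batch of inputs
W1 = rng.normal(size=(8, 4))       # first layer: 4 output units
W1[:, 3] = W1[:, 2]                # make units 2 and 3 identical (redundant)
W2 = rng.normal(size=(4, 3))       # second layer consumes the 4 units

y_ref = (x @ W1) @ W2              # original two-layer output (linear, no bias)

# Merge the duplicate unit: drop column 3 of W1, and add its outgoing
# weights (row 3 of W2) onto the kept unit's row, so the output is unchanged.
W1_p = W1[:, :3]
W2_p = W2[:3].copy()
W2_p[2] += W2[3]

y_pruned = (x @ W1_p) @ W2_p
print(np.allclose(y_ref, y_pruned))  # True: pruning is exact, no fine-tuning
```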
A Hybrid Filter Pruning Method Based on Linear Region Analysis
CH Hsieh, JC Yang, HY Lin, LJ Kuo… - 2023 IEEE 22nd …, 2023 - ieeexplore.ieee.org
This study proposes a hybrid filter pruning method based on linear region analysis. Our
approach combines the advantages of cluster pruning and norm-based filter pruning by …
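One simple way to combine the two criteria the abstract mentions is to cluster filters to expose redundancy and then apply the norm criterion within each cluster; the sketch below does that under assumed shapes and cluster counts, and is not claimed to match the paper's linear-region analysis.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 16, 3, 3))    # hypothetical conv filters
flat = weights.reshape(weights.shape[0], -1)
norms = np.linalg.norm(flat, axis=1)

# Cluster filters, then keep only the highest-L2-norm member of each cluster.
n_clusters = 32
km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(flat)

keep_idx = []
for c in range(n_clusters):
    members = np.where(km.labels_ == c)[0]
    keep_idx.append(members[np.argmax(norms[members])])

pruned = weights[np.sort(keep_idx)]
print(pruned.shape)  # (32, 16, 3, 3)
```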
Convolution Kernel Pruning Algorithm Based on Average Percentage of Zeros and Data Distribution Similarity
X Li, J Gong, H Lv, J Wen, K Liu… - 2024 IEEE International …, 2024 - ieeexplore.ieee.org
Pruning convolutional kernels is a crucial method for achieving model lightweighting.
However, current APoZ (Average Percentage of Zeros) based pruning algorithms often …
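APoZ itself is well defined: the fraction of zero post-ReLU activations per channel, with high-APoZ channels treated as prunable. A minimal NumPy sketch, with the tensor shapes and prune rate chosen arbitrarily:

```python
import numpy as np

# Hypothetical post-ReLU feature maps: (batch, channels, H, W).
rng = np.random.default_rng(0)
activations = np.maximum(rng.normal(size=(32, 64, 14, 14)), 0.0)

# APoZ per channel: fraction of zeros over the batch and spatial dimensions.
apoz = (activations == 0.0).mean(axis=(0, 2, 3))

# Prune the channels whose outputs are zero most often (highest APoZ).
prune_rate = 0.25
n_prune = int(round(prune_rate * apoz.size))
prune_idx = np.argsort(apoz)[::-1][:n_prune]
print("channels to prune:", np.sort(prune_idx))
```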