A survey on deep neural network pruning: Taxonomy, comparison, analysis, and recommendations

H Cheng, M Zhang, JQ Shi - IEEE Transactions on Pattern …, 2024 - ieeexplore.ieee.org
Modern deep neural networks, particularly recent large language models, come with
massive model sizes that require significant computational and storage resources. To …

Exploring the landscape of machine unlearning: A comprehensive survey and taxonomy

T Shaik, X Tao, H Xie, L Li, X Zhu… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Machine unlearning (MU) is gaining increasing attention due to the need to remove or
modify predictions made by machine learning (ML) models. While training models have …

SPViT: Enabling faster vision transformers via latency-aware soft token pruning

Z Kong, P Dong, X Ma, X Meng, W Niu, M Sun… - European conference on …, 2022 - Springer
Recently, Vision Transformer (ViT) has continuously established new milestones in
the computer vision field, while the high computation and memory cost makes its …

CHEX: Channel exploration for CNN model compression

Z Hou, M Qin, F Sun, X Ma, K Yuan… - Proceedings of the …, 2022 - openaccess.thecvf.com
Channel pruning has been broadly recognized as an effective technique to reduce the
computation and memory cost of deep convolutional neural networks. However …

Model sparsity can simplify machine unlearning

J Liu, P Ram, Y Yao, G Liu, Y Liu… - Advances in Neural …, 2024 - proceedings.neurips.cc
In response to recent data regulation requirements, machine unlearning (MU) has emerged
as a critical process to remove the influence of specific examples from a given model …

Federated dynamic sparse training: Computing less, communicating less, yet learning better

S Bibikar, H Vikalo, Z Wang, X Chen - Proceedings of the AAAI …, 2022 - ojs.aaai.org
Federated learning (FL) enables distribution of machine learning workloads from the cloud
to resource-limited edge devices. Unfortunately, current deep networks remain not only too …

Advancing model pruning via bi-level optimization

Y Zhang, Y Yao, P Ram, P Zhao… - Advances in …, 2022 - proceedings.neurips.cc
The deployment constraints in practical applications necessitate the pruning of large-scale
deep learning models, i.e., promoting their weight sparsity. As illustrated by the Lottery Ticket …

Coarsening the granularity: Towards structurally sparse lottery tickets

T Chen, X Chen, X Ma, Y Wang… - … conference on machine …, 2022 - proceedings.mlr.press
The lottery ticket hypothesis (LTH) has shown that dense models contain highly sparse
subnetworks (i.e., winning tickets) that can be trained in isolation to match full accuracy …

An introduction to bilevel optimization: Foundations and applications in signal processing and machine learning

Y Zhang, P Khanduri, I Tsaknakis, Y Yao… - IEEE Signal …, 2024 - ieeexplore.ieee.org
Recently, bilevel optimization (BLO) has taken center stage in some very exciting
developments in the area of signal processing (SP) and machine learning (ML). Roughly …

Rare gems: Finding lottery tickets at initialization

K Sreenivasan, J Sohn, L Yang… - Advances in neural …, 2022 - proceedings.neurips.cc
Large neural networks can be pruned to a small fraction of their original size, with little loss
in accuracy, by following a time-consuming "train, prune, re-train" approach. Frankle & …