Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks

T Hoefler, D Alistarh, T Ben-Nun, N Dryden… - Journal of Machine …, 2021 - jmlr.org
The growing energy and performance costs of deep learning have driven the community to
reduce the size of neural networks by selectively pruning components. Similarly to their …
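
For context, the basic operation behind such size reduction is magnitude pruning: zero out the weights with the smallest absolute values and keep the rest. The sketch below is a generic PyTorch illustration of that idea, not a method taken from any paper listed here; the layer shape and sparsity level are arbitrary placeholders.

```python
# Generic unstructured magnitude pruning: keep only the largest-magnitude weights.
# Illustrative sketch only; shapes, sparsity, and names are made up.
import torch

def magnitude_prune_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Return a boolean mask dropping the `sparsity` fraction of smallest-magnitude weights."""
    k = int(sparsity * weight.numel())
    if k == 0:
        return torch.ones_like(weight, dtype=torch.bool)
    threshold = weight.abs().flatten().kthvalue(k).values   # k-th smallest |w|
    return weight.abs() > threshold

w = torch.randn(256, 128)                       # stand-in for a trained layer
mask = magnitude_prune_mask(w, sparsity=0.9)    # remove roughly 90% of connections
w_pruned = w * mask                             # fine-tuning with the mask fixed would follow
print(f"remaining weights: {mask.float().mean().item():.2%}")
```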

Sparse training via boosting pruning plasticity with neuroregeneration

S Liu, T Chen, X Chen, Z Atashgahi… - Advances in …, 2021 - proceedings.neurips.cc
Works on the lottery ticket hypothesis (LTH) and single-shot network pruning (SNIP) have recently
drawn considerable attention to post-training pruning (iterative magnitude pruning) and before …

Head2Toe: Utilizing intermediate representations for better transfer learning

U Evci, V Dumoulin, H Larochelle… - … on Machine Learning, 2022 - proceedings.mlr.press
Transfer-learning methods aim to improve performance in a data-scarce target domain using
a model pretrained on a data-rich source domain. A cost-efficient strategy, linear probing …
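
As a point of reference, linear probing (the cost-efficient baseline mentioned above) freezes the pretrained backbone and trains only a linear classifier on its output features. The sketch below illustrates that baseline with dummy data; the toy backbone, dimensions, and batch are placeholders, not the Head2Toe method itself.

```python
# Minimal linear-probing sketch: frozen pretrained features + trainable linear head.
# All shapes and the toy backbone are illustrative placeholders.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512), nn.ReLU())  # stand-in for a pretrained network
for p in backbone.parameters():
    p.requires_grad_(False)                    # freeze the source-domain features

probe = nn.Linear(512, 10)                     # only this layer is trained
optimizer = torch.optim.SGD(probe.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 3, 32, 32)                 # dummy target-domain batch
y = torch.randint(0, 10, (64,))
with torch.no_grad():
    feats = backbone(x)                        # frozen intermediate representation
loss = loss_fn(probe(feats), y)
loss.backward()
optimizer.step()
```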

Do we actually need dense over-parameterization? In-time over-parameterization in sparse training

S Liu, L Yin, DC Mocanu… - … on Machine Learning, 2021 - proceedings.mlr.press
In this paper, we introduce a new perspective on training deep neural networks capable of
state-of-the-art performance without the need for the expensive over-parameterization by …
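
Dynamic sparse training of this kind typically alternates ordinary weight updates with a prune-and-regrow step that keeps the overall sparsity fixed while changing which connections are active. The sketch below shows one generic SET-style update under that assumption; it is not the exact procedure of the paper above.

```python
# One generic prune-and-regrow step for dynamic sparse training (SET-style sketch).
# Drops the smallest-magnitude active weights and regrows the same number of
# randomly chosen inactive connections, so total sparsity stays constant.
import torch

def prune_and_regrow(weight: torch.Tensor, mask: torch.Tensor, drop_fraction: float = 0.3) -> torch.Tensor:
    active = mask.nonzero(as_tuple=False)                    # coordinates of active weights
    n_drop = int(drop_fraction * active.size(0))
    if n_drop == 0:
        return mask
    # Prune: remove the smallest-magnitude active connections.
    magnitudes = weight[mask].abs()                          # same row-major order as `active`
    drop_idx = magnitudes.topk(n_drop, largest=False).indices
    new_mask = mask.clone()
    dropped = active[drop_idx]
    new_mask[dropped[:, 0], dropped[:, 1]] = False
    # Regrow: activate the same number of random inactive connections, initialized to zero.
    inactive = (~new_mask).nonzero(as_tuple=False)
    grown = inactive[torch.randperm(inactive.size(0))[:n_drop]]
    new_mask[grown[:, 0], grown[:, 1]] = True
    weight.data[grown[:, 0], grown[:, 1]] = 0.0
    return new_mask

w = torch.randn(256, 128)
m = torch.rand_like(w) < 0.1                                 # start at roughly 90% sparsity
m = prune_and_regrow(w, m)
```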

Efficient intrusion detection system in the cloud using fusion feature selection approaches and an ensemble classifier

M Bakro, RR Kumar, AA Alabrah, Z Ashraf, SK Bisoy… - Electronics, 2023 - mdpi.com
The application of cloud computing has increased tremendously in both public and private
organizations. However, attacks on cloud computing pose a serious threat to confidentiality …

Deep ensembling with no overhead for either training or testing: The all-round blessings of dynamic sparsity

S Liu, T Chen, Z Atashgahi, X Chen, G Sokar… - arXiv preprint arXiv …, 2021 - arxiv.org
The success of deep ensembles on improving predictive performance, uncertainty
estimation, and out-of-distribution robustness has been extensively studied in the machine …

Ten lessons we have learned in the new "sparseland": A short handbook for sparse neural network researchers

S Liu, Z Wang - arXiv preprint arXiv:2302.02596, 2023 - arxiv.org
This article does not propose any novel algorithm or new hardware for sparsity. Instead, it
aims to serve the" common good" for the increasingly prosperous Sparse Neural Network …

Dynamic sparse training for deep reinforcement learning

G Sokar, E Mocanu, DC Mocanu, M Pechenizkiy… - arXiv preprint arXiv …, 2021 - arxiv.org
Deep reinforcement learning (DRL) agents are trained through trial-and-error interactions
with the environment. This leads to a long training time for dense neural networks to achieve …

Where to pay attention in sparse training for feature selection?

G Sokar, Z Atashgahi, M Pechenizkiy… - Advances in Neural …, 2022 - proceedings.neurips.cc
A new line of research for feature selection based on neural networks has recently emerged.
Despite its superiority to classical methods, it requires many training iterations to converge …

Automatic noise filtering with dynamic sparse training in deep reinforcement learning

B Grooten, G Sokar, S Dohare, E Mocanu… - arXiv preprint arXiv …, 2023 - arxiv.org
Tomorrow's robots will need to distinguish useful information from noise when performing
different tasks. A household robot for instance may continuously receive a plethora of …