Robust training under label noise by over-parameterization

S Liu, Z Zhu, Q Qu, C You - International Conference on …, 2022 - proceedings.mlr.press
Recently, over-parameterized deep networks, with increasingly more network parameters
than training samples, have dominated the performance of modern machine learning …

Exploring lottery ticket hypothesis in spiking neural networks

Y Kim, Y Li, H Park, Y Venkatesha, R Yin… - European Conference on …, 2022 - Springer
Spiking Neural Networks (SNNs) have recently emerged as a new generation of
low-power deep neural networks, which are suitable for implementation on low-power …

The lazy neuron phenomenon: On emergence of activation sparsity in transformers

Z Li, C You, S Bhojanapalli, D Li, AS Rawat… - arxiv preprint arxiv …, 2022 - arxiv.org
This paper studies the curious phenomenon for machine learning models with Transformer
architectures that their activation maps are sparse …

Ten lessons we have learned in the new "sparseland": A short handbook for sparse neural network researchers

S Liu, Z Wang - arxiv preprint arxiv:2302.02596, 2023 - arxiv.org
This article does not propose any novel algorithm or new hardware for sparsity. Instead, it
aims to serve the "common good" for the increasingly prosperous Sparse Neural Network …

Visual prompting upgrades neural network sparsification: A data-model perspective

C Jin, T Huang, Y Zhang, M Pechenizkiy, S Liu… - arxiv preprint arxiv …, 2023 - arxiv.org
The rapid development of large-scale deep learning models questions the affordability of
hardware platforms, which necessitates pruning to reduce their computational and …