A survey on deep neural network pruning: Taxonomy, comparison, analysis, and recommendations
Modern deep neural networks, particularly recent large language models, come with
massive model sizes that require significant computational and storage resources. To …
An overview of deep learning methods for multimodal medical data mining
Deep learning methods have achieved significant results in various fields. Due to the
success of these methods, many researchers have used deep learning algorithms in …
Eagles: Efficient accelerated 3d gaussians with lightweight encodings
Recently, 3D Gaussian splatting (3D-GS) has gained popularity in novel-view
scene synthesis. It addresses the challenges of lengthy training times and slow rendering …
Pruning neural networks without any data by iteratively conserving synaptic flow
Pruning the parameters of deep neural networks has generated intense interest due to
potential savings in time, memory and energy both during training and at test time. Recent …
Parameter-efficient fine-tuning methods for pretrained language models: A critical review and assessment
With the continuous growth in the number of parameters of transformer-based pretrained
language models (PLMs), particularly the emergence of large language models (LLMs) with …
Sparse training via boosting pruning plasticity with neuroregeneration
Works on the lottery ticket hypothesis (LTH) and single-shot network pruning (SNIP) have recently
drawn considerable attention to post-training pruning (iterative magnitude pruning) and before …
Do we actually need dense over-parameterization? in-time over-parameterization in sparse training
In this paper, we introduce a new perspective on training deep neural networks capable of
state-of-the-art performance without the need for the expensive over-parameterization by …
Can subnetwork structure be the key to out-of-distribution generalization?
Can models with particular structure avoid being biased towards spurious correlation in out-
of-distribution (OOD) generalization? Peters et al. (2016) provide a positive answer for …
Towards provably efficient quantum algorithms for large-scale machine-learning models
Large machine learning models are revolutionary technologies of artificial intelligence
whose bottlenecks include huge computational expenses, power, and time used both in the …
Model sparsity can simplify machine unlearning
In response to recent data regulation requirements, machine unlearning (MU) has emerged
as a critical process to remove the influence of specific examples from a given model …