Compressing explicit voxel grid representations: fast NeRFs become also small

CL Deng, E Tartaglione - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
NeRFs have revolutionized the world of per-scene radiance field reconstruction because of
their intrinsic compactness. One of the main limitations of NeRFs is their slow rendering …
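
The snippet above is mostly motivational, but the title points at compressing explicit voxel grid (EVG) radiance fields. A minimal sketch of the general idea, assuming a dense PyTorch feature grid with a per-voxel density volume; `compress_voxel_grid` and its threshold are hypothetical illustrations, not the paper's actual pipeline:

```python
import torch

def compress_voxel_grid(grid: torch.Tensor, density: torch.Tensor,
                        density_thresh: float = 1e-2):
    """Hypothetical sketch: mask out near-empty voxels, then quantize
    the surviving features to int8 (occupancy mask + symmetric scale)."""
    occupied = density > density_thresh      # (X, Y, Z) boolean mask
    features = grid[occupied]                # (N, C) surviving features

    scale = features.abs().max() / 127.0     # per-tensor int8 scale
    q_features = torch.round(features / scale).to(torch.int8)
    return occupied, q_features, scale

# At render time: features ≈ q_features.float() * scale
```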

Loss-based sensitivity regularization: towards deep sparse neural networks

E Tartaglione, A Bragagnolo, A Fiandrotti, M Grangetto - Neural Networks, 2022 - Elsevier
LOBSTER (LOss-Based SensiTivity rEgulaRization) is a method for training neural
networks having a sparse topology. Let the sensitivity of a network parameter be the …
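
As a rough illustration of the idea behind LOBSTER, here is a hedged PyTorch sketch in which sensitivity is taken to be the normalized magnitude of the loss gradient, and low-sensitivity weights receive extra shrinkage toward zero; the paper's exact update rule may differ:

```python
import torch

def lobster_step(params, loss, lr=1e-3, lam=1e-4):
    """Sketch of a loss-based sensitivity regularization step: weights
    whose loss gradient is small (low sensitivity) are shrunk harder,
    driving the network toward a sparse topology."""
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for w, g in zip(params, grads):
            sens = g.abs()
            sens = sens / (sens.max() + 1e-12)   # normalize to [0, 1]
            w -= lr * g                          # usual gradient step
            w -= lr * lam * w * (1.0 - sens)     # extra shrinkage if insensitive
```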

The simpler the better: An entropy-based importance metric to reduce neural networks' depth

V Quétu, Z Liao, E Tartaglione - Joint European Conference on Machine …, 2024 - Springer
While deep neural networks are highly effective at solving complex tasks, large pre-trained
models are commonly employed even to solve consistently simpler downstream tasks …
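
A hedged sketch of what an entropy-based layer importance metric can look like, assuming ReLU networks: measure how often each neuron is ON across a dataset; a layer whose neurons are almost deterministic (low entropy) behaves near-linearly and is a candidate for removal. The function below is illustrative, not necessarily the paper's exact metric:

```python
import torch

def relu_state_entropy(activations: torch.Tensor) -> float:
    """`activations` holds pre-ReLU values of shape (samples, neurons).
    Low entropy means neurons are almost always ON or OFF, so the
    layer is close to linear and may be removable."""
    p_on = (activations > 0).float().mean(dim=0)      # P(neuron is ON)
    eps = 1e-12
    h = -(p_on * (p_on + eps).log2()
          + (1 - p_on) * (1 - p_on + eps).log2())     # per-neuron entropy
    return h.mean().item()                            # layer-level score
```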

Simplify: A Python library for optimizing pruned neural networks

A Bragagnolo, CA Barbano - SoftwareX, 2022 - Elsevier
Neural network pruning allows for an impressive theoretical reduction of model sizes and
complexity. However, it usually offers little practical benefit, as it is most often limited to just …
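
The point of the library is that zeroed parameters only pay off once the corresponding structures are physically removed from the network. The sketch below illustrates that operation for a pair of consecutive `nn.Conv2d` layers; it is a hand-rolled illustration of the concept, not Simplify's actual API:

```python
import torch
import torch.nn as nn

def drop_zero_channels(conv_a: nn.Conv2d, conv_b: nn.Conv2d):
    """Physically remove output channels of conv_a that pruning zeroed
    out, plus the matching input channels of the following conv_b."""
    keep = conv_a.weight.detach().abs().sum(dim=(1, 2, 3)) > 0  # nonzero filters
    idx = keep.nonzero(as_tuple=True)[0]

    new_a = nn.Conv2d(conv_a.in_channels, len(idx), conv_a.kernel_size,
                      conv_a.stride, conv_a.padding, bias=conv_a.bias is not None)
    new_a.weight.data = conv_a.weight.data[idx]
    if conv_a.bias is not None:
        new_a.bias.data = conv_a.bias.data[idx]

    new_b = nn.Conv2d(len(idx), conv_b.out_channels, conv_b.kernel_size,
                      conv_b.stride, conv_b.padding, bias=conv_b.bias is not None)
    new_b.weight.data = conv_b.weight.data[:, idx]
    if conv_b.bias is not None:
        new_b.bias.data = conv_b.bias.data
    return new_a, new_b
```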

SecureEI: Proactive intellectual property protection of AI models for edge intelligence

P Li, J Huang, S Zhang, C Qi - Computer Networks, 2024 - Elsevier
Deploying AI models on edge computing platforms enhances real-time performance,
reduces network dependency, and ensures data privacy on terminal devices. However …

To update or not to update? Neurons at equilibrium in deep models

A Bragagnolo, E Tartaglione… - Advances in neural …, 2022 - proceedings.neurips.cc
Recent advances in deep learning optimization showed that, with some a posteriori
information on fully-trained models, it is possible to match the same performance by simply …
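
A hedged sketch of the underlying test: track each neuron's output on a fixed validation batch across epochs, and stop updating neurons whose output has stopped changing. The velocity definition below is an assumption for illustration; the paper's formulation may differ:

```python
import torch

def neuron_velocity(prev_out: torch.Tensor, curr_out: torch.Tensor) -> torch.Tensor:
    """Per-neuron relative change of the outputs on a fixed validation
    batch between two consecutive epochs. Shapes: (samples, neurons).
    Neurons with velocity ~ 0 are 'at equilibrium'."""
    diff = (curr_out - prev_out).pow(2).mean(dim=0)
    norm = prev_out.pow(2).mean(dim=0) + 1e-12
    return diff / norm

def freeze_mask(velocity: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    # True where the neuron's gradients can be zeroed to skip its update.
    return velocity < eps
```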

Playing the lottery with concave regularizers for sparse trainable neural networks

G Fracastoro, SM Fosson, A Migliorati… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
The design of sparse neural networks, i.e., of networks with a reduced number of parameters,
has been attracting increasing research attention in the last few years. The use of sparse …
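
As one concrete, hedged example of a concave sparsity-inducing regularizer, the log-sum penalty is common in this line of work; whether it matches the regularizers studied in the paper is an assumption:

```python
import torch

def log_sum_penalty(params, eps: float = 1e-2) -> torch.Tensor:
    """Concave sparsity regularizer: log(1 + |w|/eps) grows slower than
    |w| for large weights, so it shrinks small weights hard while barely
    biasing large ones, unlike the (convex) L1 penalty."""
    return sum(torch.log1p(p.abs() / eps).sum() for p in params)

# Training use: loss = task_loss + lam * log_sum_penalty(model.parameters())
```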

On the role of structured pruning for neural network compression

A Bragagnolo, E Tartaglione… - … on Image Processing …, 2021 - ieeexplore.ieee.org
This work explores the benefits of structured parameter pruning in the framework of the
MPEG standardization efforts for neural network compression. First, less relevant parameters …
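
Structured pruning removes whole filters or channels rather than individual weights, which is what makes the result exploitable by standard hardware and by compression pipelines such as MPEG's. A minimal sketch with PyTorch's built-in pruning utilities, which zero (but do not yet physically remove) the weakest filters:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Zero out the 30% of conv filters (dim=0) with the smallest L2 norm,
# then make the mask permanent before export/compression.
conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
prune.ln_structured(conv, name="weight", amount=0.3, n=2, dim=0)
prune.remove(conv, "weight")   # bake the zeros into the weight tensor
```

The zeroed filters still occupy memory and compute; a tool such as Simplify (above) is what physically removes them.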

Lightweight Federated Learning for Efficient Network Intrusion Detection

A Bouayad, H Alami, M Janati Idrissi, I Berrada - IEEE Access, 2024 - janati.me
Network Intrusion Detection Systems (NIDS) play a crucial role in ensuring
cybersecurity across various digital infrastructures. However, traditional NIDS face …
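
The snippet is cut off before the method, but a federated NIDS setup typically revolves around FedAvg-style aggregation; the sketch below shows that baseline, with the "lightweight" part (compressing the exchanged updates) left as an assumption:

```python
import torch

def fedavg(state_dicts, weights):
    """Weighted average of client model state dicts, e.g. weighted by
    local dataset size. Lightweight variants additionally sparsify or
    quantize the updates before they are exchanged."""
    total = sum(weights)
    avg = {}
    for key in state_dicts[0]:
        avg[key] = sum(w * sd[key].float()
                       for w, sd in zip(weights, state_dicts)) / total
    return avg
```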

Inshrinkerator: Compressing Deep Learning Training Checkpoints via Dynamic Quantization

A Agrawal, S Reddy, S Bhattamishra… - Proceedings of the …, 2024 - dl.acm.org
The likelihood of encountering in-training failures rises substantially with larger Deep
Learning (DL) training workloads, leading to lost work and resource wastage. Such failures …
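
The title indicates that checkpoints are compressed via dynamic quantization. A hedged sketch of the basic building block, per-tensor low-bit quantization of a state dict; how Inshrinkerator chooses bit-widths dynamically is not in the snippet, so the fixed-width version below is an illustration only:

```python
import torch

def quantize_checkpoint(state_dict, bits: int = 8):
    """Store each tensor as low-bit integers plus one float scale.
    Dynamic schemes vary `bits` per tensor or over training; this
    fixes it for simplicity."""
    qmax = 2 ** (bits - 1) - 1
    packed = {}
    for name, t in state_dict.items():
        t = t.float()
        scale = t.abs().max().clamp(min=1e-12) / qmax
        packed[name] = (torch.round(t / scale).to(torch.int8), scale)
    return packed

def dequantize_checkpoint(packed):
    return {name: q.float() * scale for name, (q, scale) in packed.items()}
```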