Wild patterns reloaded: A survey of machine learning security against training data poisoning

AE Cinà, K Grosse, A Demontis, S Vascon… - ACM Computing …, 2023 - dl.acm.org
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …

Energy-latency attacks via sponge poisoning

AE Cinà, A Demontis, B Biggio, F Roli, M Pelillo - Information Sciences, 2025 - Elsevier
Sponge examples are test-time inputs optimized to increase energy consumption and
prediction latency of deep networks deployed on hardware accelerators. By increasing the …
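
The snippet describes sponge examples as test-time inputs optimized to raise energy use and latency. Below is a minimal sketch of that idea, assuming a PyTorch model and using the L1 norm of hidden activations as a crude energy proxy (sparsity-exploiting accelerators do less work when activations are zero); the toy model, proxy, and hyperparameters are illustrative assumptions, not the paper's sponge-poisoning formulation.

```python
# Sketch: craft a sponge example by gradient ascent on an energy proxy.
# The proxy (L1 norm of ReLU activations) and the model are illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

x = torch.rand(1, 784, requires_grad=True)            # candidate sponge input
optimizer = torch.optim.Adam([x], lr=0.05)

activations = []
model[1].register_forward_hook(lambda m, i, o: activations.append(o))

for _ in range(100):
    activations.clear()
    optimizer.zero_grad()
    model(x)
    energy_proxy = activations[0].abs().sum()          # denser activations -> more work
    (-energy_proxy).backward()                          # ascend on the proxy
    optimizer.step()
    x.data.clamp_(0.0, 1.0)                             # keep the input in a valid range
```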

Robust ML model ensembles via risk-driven anti-clustering of training data

L Mauri, B Apolloni, E Damiani - Information Sciences, 2023 - Elsevier
In this paper, we improve the robustness of Machine Learning (ML) classifiers against
training-time attacks by linking the risk of training data being tampered with to the …
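
The title suggests splitting the training set by anti-clustering, i.e., spreading similar (and therefore possibly jointly tampered) points across ensemble members and aggregating their votes. A minimal sketch under that assumption follows, using a round-robin split within k-means clusters as a crude stand-in for the paper's risk-driven anti-clustering; all function names and the base learner are illustrative.

```python
# Sketch: anti-clustered ensemble -- similar points go to *different* subsets,
# one model per subset, majority vote at prediction time.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def anti_cluster_split(X, n_subsets, seed=0):
    """Round-robin assignment within k-means clusters spreads neighbours apart."""
    labels = KMeans(n_clusters=n_subsets, random_state=seed, n_init=10).fit_predict(X)
    assignment = np.empty(len(X), dtype=int)
    for c in range(n_subsets):
        idx = np.where(labels == c)[0]
        assignment[idx] = np.arange(len(idx)) % n_subsets
    return assignment

def train_ensemble(X, y, n_subsets=5):
    parts = anti_cluster_split(X, n_subsets)
    return [LogisticRegression(max_iter=1000).fit(X[parts == k], y[parts == k])
            for k in range(n_subsets)]

def majority_vote(models, X):
    votes = np.stack([m.predict(X) for m in models]).astype(int)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```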

Minimizing energy consumption of deep learning models by energy-aware training

D Lazzaro, AE Cinà, M Pintor, A Demontis… - … Conference on Image …, 2023 - Springer
Deep learning models have grown substantially in the number of parameters they
contain, leading to a larger number of operations executed during inference. This …
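
A minimal sketch of the energy-aware training idea as the snippet frames it, assuming the inference energy cost is approximated by how much work the network does (here, the L1 norm of hidden activations, which encourages activation sparsity) and that a penalty on this proxy is simply added to the task loss; lambda_e and the architecture are illustrative assumptions, not the paper's exact objective.

```python
# Sketch: add an energy-proxy penalty to the usual training loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
lambda_e = 1e-4                                        # strength of the energy penalty

acts = []
model[1].register_forward_hook(lambda m, i, o: acts.append(o))

x = torch.rand(64, 784)                                # toy batch
y = torch.randint(0, 10, (64,))

for _ in range(10):
    acts.clear()
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y) + lambda_e * acts[0].abs().sum()
    loss.backward()
    opt.step()
```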

Backdoor learning curves: Explaining backdoor poisoning beyond influence functions

AE Cinà, K Grosse, S Vascon, A Demontis… - International Journal of …, 2024 - Springer
Backdoor attacks inject poisoning samples during training, with the goal of forcing a
machine learning model to output an attacker-chosen class when presented with a specific …
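
A minimal sketch of the poisoning step described in the snippet, assuming image data in [0, 1] with shape (N, C, H, W): a small trigger patch is stamped onto a fraction of the training samples and their labels are changed to an attacker-chosen class. The parameters poison_rate, trigger_size, and target_class are hypothetical, chosen only for illustration.

```python
# Sketch: backdoor poisoning by stamping a trigger and relabelling.
import numpy as np

def poison(X, y, target_class=0, poison_rate=0.05, trigger_size=3, seed=0):
    rng = np.random.default_rng(seed)
    X_p, y_p = X.copy(), y.copy()
    idx = rng.choice(len(X), size=int(poison_rate * len(X)), replace=False)
    X_p[idx, :, -trigger_size:, -trigger_size:] = 1.0   # white square in the corner
    y_p[idx] = target_class                             # attacker-chosen label
    return X_p, y_p
```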

What distributions are robust to indiscriminate poisoning attacks for linear learners?

F Suya, X Zhang, Y Tian… - Advances in neural …, 2023 - proceedings.neurips.cc
We study indiscriminate poisoning for linear learners where an adversary injects a few
crafted examples into the training data with the goal of forcing the induced model to incur …
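
A minimal sketch of the threat model in the snippet, using label-flipped duplicates as a simple stand-in for the optimized poisoning points the paper studies, with logistic regression as the linear learner and synthetic data; the 10% budget is an arbitrary illustrative choice.

```python
# Sketch: indiscriminate poisoning of a linear learner via label-flipped injections.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

rng = np.random.default_rng(0)
idx = rng.choice(len(X_tr), size=int(0.1 * len(X_tr)), replace=False)  # 10% budget
X_poi = np.vstack([X_tr, X_tr[idx]])
y_poi = np.concatenate([y_tr, 1 - y_tr[idx]])          # flipped labels for injected copies

poisoned = LogisticRegression(max_iter=1000).fit(X_poi, y_poi)
print("clean acc:", clean.score(X_te, y_te), "poisoned acc:", poisoned.score(X_te, y_te))
```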

On the feasibility of adversarial machine learning in malware and network intrusion detection

A Venturi, C Zanasi - 2021 IEEE 20th International Symposium …, 2021 - ieeexplore.ieee.org
Nowadays, Machine Learning (ML) solutions are widely adopted in modern malware and
network intrusion detection systems. While these algorithms offer great performance, several …

Hardening RGB-D object recognition systems against adversarial patch attacks

Y Zheng, L Demetrio, AE Cinà, X Feng, Z Xia… - Information …, 2023 - Elsevier
RGB-D object recognition systems improve their predictive performance by fusing color and
depth information, outperforming neural network architectures that rely solely on colors …
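
A minimal sketch of the fusion idea the snippet refers to, assuming a simple late-fusion design in PyTorch where separate RGB and depth encoders are concatenated before a shared classifier; the architecture and layer sizes are illustrative, not the recognition systems evaluated in the paper.

```python
# Sketch: late fusion of RGB and depth features.
import torch
import torch.nn as nn

class RGBDFusionNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.rgb_encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.depth_encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                           nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.classifier = nn.Linear(32, n_classes)     # 16 RGB + 16 depth features

    def forward(self, rgb, depth):
        fused = torch.cat([self.rgb_encoder(rgb), self.depth_encoder(depth)], dim=1)
        return self.classifier(fused)

model = RGBDFusionNet()
logits = model(torch.rand(2, 3, 32, 32), torch.rand(2, 1, 32, 32))
```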

The Impact of Active Learning on Availability Data Poisoning for Android Malware Classifiers

S McFadden, M Kan, L Cavallaro… - Proceedings of the …, 2024 - kclpure.kcl.ac.uk
Can a poisoned machine learning (ML) model passively recover from its adversarial
manipulation by retraining with new samples, and regain non-poisoned performance? And if …
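
A minimal sketch of the recovery question posed in the snippet, assuming a linear model updated incrementally as clean samples arrive (via scikit-learn's partial_fit) after an initial label-flipping poisoning; the synthetic data and model are illustrative, not the Android malware pipeline studied in the paper.

```python
# Sketch: does test accuracy recover as a poisoned model is retrained on clean batches?
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=5000, n_features=30, random_state=1)
X_init, y_init = X[:1500], y[:1500].copy()             # initial training data (to be poisoned)
X_new, y_new = X[1500:3500], y[1500:3500]              # clean samples arriving later
X_te, y_te = X[3500:], y[3500:]                        # held-out test set

# Availability poisoning via label flipping on 20% of the initial data.
rng = np.random.default_rng(1)
flip = rng.choice(len(y_init), size=len(y_init) // 5, replace=False)
y_init[flip] = 1 - y_init[flip]

clf = SGDClassifier(random_state=1)
clf.partial_fit(X_init, y_init, classes=np.array([0, 1]))

# Passively retrain on clean batches and track recovery.
for start in range(0, len(X_new), 500):
    clf.partial_fit(X_new[start:start + 500], y_new[start:start + 500])
    print(f"after {start + 500} clean samples: test acc = {clf.score(X_te, y_te):.3f}")
```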

Empirical Perturbation Analysis of Linear System Solvers from a Data Poisoning Perspective

Y Liu, A Carr, L Sun - arXiv preprint arXiv:2410.00878, 2024 - arxiv.org
The perturbation analysis of linear solvers applied to systems arising broadly in machine
learning settings--for instance, when using linear regression models--establishes an …
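
For background, the classical sensitivity bound for solving a linear system is the kind of result such an analysis builds on; under a poisoning view, the data perturbation enters through the perturbed coefficient matrix and right-hand side. The textbook bound is stated below as context, not as the paper's own result.

```latex
% Classical bound for Ax = b versus the perturbed system
% (A + \Delta A)\hat{x} = b + \Delta b, with \kappa(A) = \|A\|\,\|A^{-1}\|,
% valid when \kappa(A)\,\|\Delta A\|/\|A\| < 1:
\[
  \frac{\|\hat{x} - x\|}{\|x\|}
  \;\le\;
  \frac{\kappa(A)}{1 - \kappa(A)\,\dfrac{\|\Delta A\|}{\|A\|}}
  \left( \frac{\|\Delta A\|}{\|A\|} + \frac{\|\Delta b\|}{\|b\|} \right).
\]
```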