Wild patterns reloaded: A survey of machine learning security against training data poisoning
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …
Energy-latency attacks via sponge poisoning
Sponge examples are test-time inputs optimized to increase energy consumption and
prediction latency of deep networks deployed on hardware accelerators. By increasing the …
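To make the abstract's first sentence concrete, here is a minimal sketch of a test-time sponge-example search: gradient ascent on an activation-density proxy for energy. The toy model, the squared-activation proxy, and all hyperparameters are illustrative assumptions; this is not the paper's training-time sponge-poisoning procedure.

import torch
import torch.nn as nn

# Toy classifier standing in for a deployed network.
model = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),
).eval()

acts = []
for m in model:
    if isinstance(m, nn.ReLU):
        m.register_forward_hook(lambda mod, inp, out: acts.append(out))

x = torch.rand(1, 64, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)
for _ in range(200):
    acts.clear()
    opt.zero_grad()
    model(x)
    # Dense (non-zero) activations defeat zero-skipping optimizations on
    # sparsity-aware accelerators, raising energy use and latency.
    energy = sum(a.pow(2).sum() for a in acts)
    (-energy).backward()   # gradient *ascent* on the energy proxy
    opt.step()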
Robust ML model ensembles via risk-driven anti-clustering of training data
In this paper, we improve the robustness of Machine Learning (ML) classifiers against
training-time attacks by linking the risk of training data being tampered with to the …
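A minimal sketch of the splitting idea, assuming a precomputed per-sample tamper-risk score (the random risk array below is a placeholder): deal the riskiest samples round-robin across k disjoint subsets so that no single ensemble member concentrates them, then predict by majority vote. The paper's actual risk model and anti-clustering step are more involved.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
risk = np.random.rand(len(X))          # placeholder tamper-risk estimate

k = 5
order = np.argsort(-risk)              # riskiest samples first
members = []
for i in range(k):
    idx = order[i::k]                  # round-robin: even risk per subset
    members.append(LogisticRegression(max_iter=1000).fit(X[idx], y[idx]))

def predict(Xq):
    votes = np.stack([m.predict(Xq) for m in members])
    # Majority vote: a few poisoned members cannot flip the ensemble.
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)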
Minimizing energy consumption of deep learning models by energy-aware training
Deep learning models have grown substantially in parameter count, which increases the
number of operations executed during inference. This …
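One hedged reading of energy-aware training as code: add a differentiable penalty on activation magnitude to the task loss, since sparser activations let accelerators skip multiplications at inference time. The toy model, random data, and the L1 proxy below are assumptions, not the paper's energy estimator.

import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 128), nn.ReLU(), nn.Linear(128, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
X, y = torch.randn(256, 20), torch.randint(0, 2, (256,))
lam = 1e-3   # trades task accuracy against the energy proxy

for epoch in range(50):
    opt.zero_grad()
    h = model[1](model[0](X))          # hidden activations after ReLU
    logits = model[2](h)
    # L1 on activations is a differentiable proxy for operation count:
    # the sparser the activations, the more multiplications an
    # accelerator can skip at inference time.
    loss = loss_fn(logits, y) + lam * h.abs().sum()
    loss.backward()
    opt.step()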
Backdoor learning curves: Explaining backdoor poisoning beyond influence functions
Backdoor attacks inject poisoning samples during training, with the goal of forcing a
machine learning model to output an attacker-chosen class when presented with a specific …
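A minimal sketch of the poisoning step this abstract describes, assuming grayscale images in [0, 1] with shape (n, 28, 28); the trigger shape, poisoning rate, and target class are illustrative choices.

import numpy as np

def poison(X, y, rate=0.05, target=0, seed=0):
    """Stamp a fixed trigger on a fraction of samples and relabel them."""
    rng = np.random.default_rng(seed)
    Xp, yp = X.copy(), y.copy()
    idx = rng.choice(len(X), size=int(rate * len(X)), replace=False)
    Xp[idx, -3:, -3:] = 1.0     # 3x3 white corner patch = the trigger
    yp[idx] = target            # attacker-chosen class
    return Xp, yp

# Usage: X_train, y_train = poison(X_train, y_train)
# A model trained on the result learns "trigger => target class" while
# clean accuracy stays nearly unchanged, which is what makes the attack
# stealthy.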
What distributions are robust to indiscriminate poisoning attacks for linear learners?
We study indiscriminate poisoning for linear learners where an adversary injects a few
crafted examples into the training data with the goal of forcing the induced model to incur …
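A toy instance of indiscriminate poisoning against a linear learner, using label-flipped, feature-amplified copies of training points as a simple stand-in for the optimized attacks the paper analyzes:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(Xtr, ytr).score(Xte, yte)

# Inject 5% poisoned points: copies of training points pushed far along
# their feature direction with flipped labels, dragging the boundary.
rng = np.random.default_rng(0)
idx = rng.choice(len(Xtr), size=len(Xtr) // 20, replace=False)
Xp = np.vstack([Xtr, 3.0 * Xtr[idx]])
yp = np.concatenate([ytr, 1 - ytr[idx]])

poisoned = LogisticRegression(max_iter=1000).fit(Xp, yp).score(Xte, yte)
print(f"clean acc {clean:.3f} -> poisoned acc {poisoned:.3f}")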
On the feasibility of adversarial machine learning in malware and network intrusion detection
Nowadays, Machine Learning (ML) solutions are widely adopted in modern malware and
network intrusion detection systems. While these algorithms offer great performance, several …
Hardening RGB-D object recognition systems against adversarial patch attacks
RGB-D object recognition systems improve their predictive performance by fusing color and
depth information, outperforming neural network architectures that rely solely on color …
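For context, a toy late-fusion RGB-D classifier of the kind the abstract alludes to: separate color and depth encoders whose features are concatenated before the head, so an RGB-only adversarial patch does not directly control the depth branch. This two-branch sketch is an assumption, not the paper's architecture.

import torch
import torch.nn as nn

class RGBDFusion(nn.Module):
    """Late fusion: independent color and depth encoders, concatenated
    before classification."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.rgb = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.depth = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                   nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, n_classes)

    def forward(self, rgb, depth):
        return self.head(torch.cat([self.rgb(rgb), self.depth(depth)], dim=1))

# logits = RGBDFusion()(torch.randn(4, 3, 64, 64), torch.randn(4, 1, 64, 64))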
The Impact of Active Learning on Availability Data Poisoning for Android Malware Classifiers
Can a poisoned machine learning (ML) model passively recover from its adversarial
manipulation by retraining with new samples, and regain non-poisoned performance? And if …
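The question can be phrased as a small experiment, sketched here under strong simplifications: poison a model (label flips as an availability-poisoning stand-in), then keep training on clean samples selected by uncertainty sampling, a basic active-learning strategy, and watch whether test accuracy recovers.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=1)
X_poison, X_clean, y_poison, y_clean = train_test_split(
    Xtr, ytr, test_size=0.5, random_state=1)

# Availability-poisoning stand-in: first training round uses flipped labels.
clf = SGDClassifier(loss="log_loss", random_state=1)
clf.partial_fit(X_poison, 1 - y_poison, classes=np.array([0, 1]))
print(f"poisoned test accuracy: {clf.score(Xte, yte):.3f}")

# Recovery: keep training on clean samples chosen by uncertainty sampling.
pool_X, pool_y = X_clean.copy(), y_clean.copy()
for r in range(10):
    margin = np.abs(clf.decision_function(pool_X))   # distance to boundary
    q = np.argsort(margin)[:50]                      # most uncertain points
    clf.partial_fit(pool_X[q], pool_y[q])
    pool_X = np.delete(pool_X, q, axis=0)
    pool_y = np.delete(pool_y, q)
    print(f"round {r}: test accuracy {clf.score(Xte, yte):.3f}")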
Empirical Perturbation Analysis of Linear System Solvers from a Data Poisoning Perspective
The perturbation analysis of linear solvers applied to systems arising broadly in machine
learning settings (for instance, when using linear regression models) establishes an …
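The classical first-order bound behind such an analysis can be checked empirically: for Ax = b with perturbations dA and db (a data-poisoning stand-in), ||x' - x|| / ||x|| <= kappa(A) (||dA||/||A|| + ||db||/||b||). The random system and perturbation scale below are illustrative.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)
x = np.linalg.solve(A, b)

eps = 1e-6
dA = eps * rng.standard_normal(A.shape)   # poisoning-style perturbation
db = eps * rng.standard_normal(b.shape)
x_pert = np.linalg.solve(A + dA, b + db)

# Empirical relative error vs. the condition-number bound.
lhs = np.linalg.norm(x_pert - x) / np.linalg.norm(x)
rhs = np.linalg.cond(A) * (np.linalg.norm(dA) / np.linalg.norm(A)
                           + np.linalg.norm(db) / np.linalg.norm(b))
print(f"observed {lhs:.2e} <= bound {rhs:.2e}")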