Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks

L Gosch, M Sabanayagam, D Ghoshdastidar… - arXiv preprint arXiv …, 2024 - arxiv.org
Generalization of machine learning models can be severely compromised by data
poisoning, where adversarial changes are applied to the training data. This vulnerability has …
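
The abstract describes data poisoning only at a high level. As a minimal, hypothetical illustration of the threat model (not this paper's certification method), the sketch below implements label-flip poisoning, one of the simplest poisoning attacks: a fraction of training labels is reassigned to a different class before training. All names here (`poison_labels`, `flip_fraction`) are invented for this example.

```python
import numpy as np

def poison_labels(y, flip_fraction=0.1, num_classes=10, rng=None):
    """Label-flip poisoning: reassign a random fraction of training
    labels to a different class, degrading the learned model."""
    if rng is None:
        rng = np.random.default_rng(0)
    y_poisoned = y.copy()
    n_flip = int(flip_fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    # Shift each chosen label by a random non-zero offset so it
    # is guaranteed to land on a different class.
    offsets = rng.integers(1, num_classes, size=n_flip)
    y_poisoned[idx] = (y_poisoned[idx] + offsets) % num_classes
    return y_poisoned

# Example: flip 10% of labels in a toy 10-class dataset.
y_clean = np.random.default_rng(1).integers(0, 10, size=1000)
y_dirty = poison_labels(y_clean, flip_fraction=0.1)
print("labels changed:", int((y_clean != y_dirty).sum()))
```

Provable robustness work of this kind aims to certify that a model's predictions cannot change under any such perturbation up to a given budget (e.g., the number of flipped labels).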

Enhancing Model Poisoning Attacks to Byzantine-Robust Federated Learning via Critical Learning Periods

G Yan, H Wang, X Yuan, J Li - … of the 27th International Symposium on …, 2024 - dl.acm.org
Most existing model poisoning attacks in federated learning (FL) control a set of malicious
clients that share a fixed number of malicious gradients with the server in each FL training …
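
For intuition about the attack model the abstract describes, the following is a minimal toy sketch of model poisoning against plain FedAvg aggregation: a fixed set of malicious clients submits crafted updates alongside benign ones in a round. This is not the paper's attack (which exploits critical learning periods); all function names (`malicious_update`, `fedavg`) are illustrative assumptions.

```python
import numpy as np

def honest_update(local_grad, lr=0.1):
    # Benign client: one local gradient step, returned as a model delta.
    return -lr * local_grad

def malicious_update(target_direction, scale=10.0):
    # Model poisoning: a crafted update that pushes the global model
    # toward an attacker-chosen direction, amplified by `scale`.
    return scale * target_direction

def fedavg(updates):
    # Server: plain FedAvg, averaging all client updates equally.
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
global_model = np.zeros(5)
target = rng.normal(size=5)  # attacker's goal direction
benign = [honest_update(rng.normal(size=5)) for _ in range(8)]
malicious = [malicious_update(target) for _ in range(2)]
global_model += fedavg(benign + malicious)
print("alignment with attacker direction:", float(global_model @ target))
```

Because FedAvg weights all clients equally, even two malicious clients with amplified updates can dominate the averaged round update, which is why Byzantine-robust aggregation rules are studied as a defense.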