Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks
Generalization of machine learning models can be severely compromised by data
poisoning, where adversarial changes are applied to the training data. This vulnerability has …
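To make the threat model in this abstract concrete, here is a minimal sketch of one classic form of data poisoning, label flipping, where an attacker corrupts a fraction of the training labels. All names and parameters (poison_labels, flip_fraction) are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of data poisoning via label flipping; illustrative only.
import numpy as np

def poison_labels(y: np.ndarray, flip_fraction: float, num_classes: int,
                  rng: np.random.Generator) -> np.ndarray:
    """Flip a fraction of training labels to a different class."""
    y_poisoned = y.copy()
    n_flip = int(flip_fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    # Shift each chosen label by a random nonzero offset so it always changes.
    offsets = rng.integers(1, num_classes, size=n_flip)
    y_poisoned[idx] = (y_poisoned[idx] + offsets) % num_classes
    return y_poisoned

rng = np.random.default_rng(0)
y_clean = rng.integers(0, 10, size=1000)        # stand-in training labels
y_dirty = poison_labels(y_clean, 0.1, 10, rng)  # 10% of labels corrupted
print((y_clean != y_dirty).mean())              # ~0.1
```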
Enhancing Model Poisoning Attacks to Byzantine-Robust Federated Learning via Critical Learning Periods
Most existing model poisoning attacks in federated learning (FL) control a set of malicious
clients and share a fixed number of malicious gradients with the server in each FL training …
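The setup this abstract describes, a fixed set of attacker-controlled clients submitting crafted gradients to the server every round, can be sketched as below. The attack shown (sign-flipping the benign mean) and the Byzantine-robust aggregation rule (coordinate-wise median) are common baseline choices assumed for illustration, not the paper's own method.

```python
# Hedged sketch of model poisoning against Byzantine-robust FL aggregation.
# Attack and function names are illustrative assumptions.
import numpy as np

def benign_gradient(dim: int, rng: np.random.Generator) -> np.ndarray:
    """Stand-in for an honest client's local gradient."""
    return rng.normal(loc=1.0, scale=0.1, size=dim)

def malicious_gradient(benign_grads: list[np.ndarray]) -> np.ndarray:
    """Simple sign-flipping attack: push the update the opposite way."""
    return -np.mean(benign_grads, axis=0)

def robust_aggregate(grads: list[np.ndarray]) -> np.ndarray:
    """Coordinate-wise median, one common Byzantine-robust rule."""
    return np.median(np.stack(grads), axis=0)

rng = np.random.default_rng(0)
dim, n_benign, n_malicious = 5, 8, 2   # attacker controls a fixed client set
benign = [benign_gradient(dim, rng) for _ in range(n_benign)]
crafted = [malicious_gradient(benign) for _ in range(n_malicious)]
update = robust_aggregate(benign + crafted)
print(update)  # median stays near the benign mean while attackers are few
```

With only a minority of malicious clients, the coordinate-wise median resists the sign-flipped gradients; stronger attacks of the kind this paper studies aim to defeat exactly such robust aggregators.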