Adversarial attacks and defenses on graphs

W **, Y Li, H Xu, Y Wang, S Ji, C Aggarwal… - ACM SIGKDD …, 2021 - dl.acm.org
Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies …

A survey of adversarial learning on graphs

L Chen, J Li, J Peng, T **

FLTrust: Byzantine-robust federated learning via trust bootstrapping

X Cao, M Fang, J Liu, NZ Gong - arxiv preprint arxiv:2012.13995, 2020 - arxiv.org
Byzantine-robust federated learning aims to enable a service provider to learn an accurate
global model when a bounded number of clients are malicious. The key idea of existing …

Local model poisoning attacks to Byzantine-Robust federated learning

M Fang, X Cao, J Jia, N Gong - 29th USENIX security symposium …, 2020 - usenix.org
In federated learning, multiple client devices jointly learn a machine learning model: each
client device maintains a local model for its local training dataset, while a master device …

Kairos: Practical intrusion detection and investigation using whole-system provenance

Z Cheng, Q Lv, J Liang, Y Wang, D Sun… - … IEEE Symposium on …, 2024 - ieeexplore.ieee.org
Provenance graphs are structured audit logs that describe the history of a system's
execution. Recent studies have explored a variety of techniques to analyze provenance …

“Real attackers don't compute gradients”: bridging the gap between adversarial ML research and practice

G Apruzzese, HS Anderson, S Dambra… - … IEEE Conference on …, 2023 - ieeexplore.ieee.org
Recent years have seen a proliferation of research on adversarial machine learning.
Numerous papers demonstrate powerful algorithmic attacks against a wide variety of …

Adversarial attack and defense on graph data: A survey

L Sun, Y Dou, C Yang, K Zhang, J Wang… - … on Knowledge and …, 2022 - ieeexplore.ieee.org
Deep neural networks (DNNs) have been widely applied to various applications, including
image classification, text generation, audio recognition, and graph data analysis. However …

BAFFLE: Backdoor detection via feedback-based federated learning

S Andreina, GA Marson, H Möllering… - 2021 IEEE 41st …, 2021 - ieeexplore.ieee.org
Recent studies have shown that federated learning (FL) is vulnerable to poisoning attacks
that inject a backdoor into the global model. These attacks are effective even when …

Backdoor attacks to graph neural networks

Z Zhang, J Jia, B Wang, NZ Gong - … of the 26th ACM Symposium on …, 2021 - dl.acm.org
In this work, we propose the first backdoor attack to graph neural networks (GNN).
Specifically, we propose a subgraph based backdoor attack to GNN for graph classification …