A comprehensive survey on trustworthy graph neural networks: Privacy, robustness, fairness, and explainability

E Dai, T Zhao, H Zhu, J Xu, Z Guo, H Liu, J Tang… - Machine Intelligence …, 2024 - Springer
Graph neural networks (GNNs) have made rapid developments in recent years. Due to
their great ability in modeling graph-structured data, GNNs are widely used in various …

A survey of privacy attacks in machine learning

M Rigaki, S Garcia - ACM Computing Surveys, 2023 - dl.acm.org
As machine learning becomes more widely used, the need to study its implications in
security and privacy becomes more urgent. Although the body of work in privacy has been …

Trustworthy graph neural networks: Aspects, methods and trends

H Zhang, B Wu, X Yuan, S Pan, H Tong… - arXiv preprint arXiv …, 2022 - arxiv.org
Graph neural networks (GNNs) have emerged as a series of competent graph learning
methods for diverse real-world scenarios, ranging from daily applications like …

Model stealing attacks against inductive graph neural networks

Y Shen, X He, Y Han, Y Zhang - 2022 IEEE Symposium on …, 2022 - ieeexplore.ieee.org
Many real-world data come in the form of graphs. Graph neural networks (GNNs), a new
family of machine learning (ML) models, have been proposed to fully leverage graph data to …

SoK: Let the privacy games begin! A unified treatment of data inference privacy in machine learning

A Salem, G Cherubin, D Evans, B Köpf… - … IEEE Symposium on …, 2023 - ieeexplore.ieee.org
Deploying machine learning models in production may allow adversaries to infer sensitive
information about training data. There is a vast literature analyzing different types of …

Adapting membership inference attacks to GNN for graph classification: Approaches and implications

B Wu, X Yang, S Pan, X Yuan - 2021 IEEE International …, 2021 - ieeexplore.ieee.org
In light of the wide application of Graph Neural Networks (GNNs), Membership Inference
Attack (MIA) against GNNs raises severe privacy concerns, where training data can be …

GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation

S Sajadmanesh, AS Shamsabadi, A Bellet… - 32nd USENIX Security …, 2023 - usenix.org
In this paper, we study the problem of learning Graph Neural Networks (GNNs) with
Differential Privacy (DP). We propose a novel differentially private GNN based on …

A comprehensive survey on trustworthy recommender systems

W Fan, X Zhao, X Chen, J Su, J Gao, L Wang… - arXiv preprint arXiv …, 2022 - arxiv.org
As one of the most successful AI-powered applications, recommender systems aim to help
people make appropriate decisions in an effective and efficient way, by providing …

Privacy leakage on DNNs: A survey of model inversion attacks and defenses

H Fang, Y Qiu, H Yu, W Yu, J Kong, B Chong… - arXiv preprint arXiv …, 2024 - arxiv.org
Deep Neural Networks (DNNs) have revolutionized various domains with their exceptional
performance across numerous applications. However, Model Inversion (MI) attacks, which …

SNAP: Efficient extraction of private properties with poisoning

H Chaudhari, J Abascal, A Oprea… - … IEEE Symposium on …, 2023 - ieeexplore.ieee.org
Property inference attacks allow an adversary to extract global properties of the training
dataset from a machine learning model. Such attacks have privacy implications for data …