A comprehensive survey on trustworthy graph neural networks: Privacy, robustness, fairness, and explainability
Graph neural networks (GNNs) have developed rapidly in recent years. Owing to their
strong ability to model graph-structured data, GNNs are widely used in various …
A survey of privacy attacks in machine learning
As machine learning becomes more widely used, the need to study its implications for
security and privacy becomes more urgent. Although the body of work in privacy has been …
Trustworthy graph neural networks: Aspects, methods and trends
Graph neural networks (GNNs) have emerged as a family of competent graph learning
methods for diverse real-world scenarios, ranging from daily applications like …
Model stealing attacks against inductive graph neural networks
Many real-world data come in the form of graphs. Graph neural networks (GNNs), a new
family of machine learning (ML) models, have been proposed to fully leverage graph data to …
SoK: Let the privacy games begin! A unified treatment of data inference privacy in machine learning
Deploying machine learning models in production may allow adversaries to infer sensitive
information about training data. There is a vast literature analyzing different types of …
Adapting membership inference attacks to GNN for graph classification: Approaches and implications
In light of the wide application of Graph Neural Networks (GNNs), Membership Inference
Attacks (MIAs) against GNNs raise severe privacy concerns, as training data can be …
GAP: Differentially private graph neural networks with aggregation perturbation
In this paper, we study the problem of learning Graph Neural Networks (GNNs) with
Differential Privacy (DP). We propose a novel differentially private GNN based on …
A comprehensive survey on trustworthy recommender systems
Recommender systems, among the most successful AI-powered applications, aim to help
people make appropriate decisions effectively and efficiently by providing …
Privacy leakage on DNNs: A survey of model inversion attacks and defenses
Deep Neural Networks (DNNs) have revolutionized various domains with their exceptional
performance across numerous applications. However, Model Inversion (MI) attacks, which …
SNAP: Efficient extraction of private properties with poisoning
Property inference attacks allow an adversary to extract global properties of the training
dataset from a machine learning model. Such attacks have privacy implications for data …