Trustworthy graph neural networks: Aspects, methods, and trends

H Zhang, B Wu, X Yuan, S Pan, H Tong… - Proceedings of the …, 2024 - ieeexplore.ieee.org
Graph neural networks (GNNs) have emerged as a series of competent graph learning
methods for diverse real-world scenarios, ranging from daily applications such as …

Privacy-Preserving Data-Driven Learning Models for Emerging Communication Networks: A Comprehensive Survey

MM Fouda, ZM Fadlullah, MI Ibrahem… - … Surveys & Tutorials, 2024 - ieeexplore.ieee.org
With the proliferation of Beyond 5G (B5G) communication systems and heterogeneous
networks, mobile broadband users are generating massive volumes of data that undergo …

A survey and guideline on privacy enhancing technologies for collaborative machine learning

EU Soykan, L Karaçay, F Karakoç, E Tomur - IEEE Access, 2022 - ieeexplore.ieee.org
As machine learning and artificial intelligence (ML/AI) are becoming more popular and
advanced, there is a wish to turn sensitive data into valuable information via ML/AI …

Privacy-preserving and fairness-aware federated learning for critical infrastructure protection and resilience

Y Zhang, R Sun, L Shen, G Bai, M Xue… - Proceedings of the …, 2024 - dl.acm.org
The energy industry is undergoing significant transformations as it strives to achieve net-
zero emissions and future-proof its infrastructure, where every participant in the power grid …

Local differential privacy for federated learning

MAP Chamikara, D Liu, S Camtepe, S Nepal… - arXiv preprint arXiv …, 2022 - arxiv.org
Advanced adversarial attacks such as membership inference and model memorization can
make federated learning (FL) vulnerable and potentially leak sensitive private data. Local …

Exploiting data sparsity in secure cross-platform social recommendation

J Cui, C Chen, L Lyu, C Yang… - Advances in Neural …, 2021 - proceedings.neurips.cc
Social recommendation has shown promising improvements over traditional systems since it
leverages social correlation data as an additional input. Most existing work assumes that all …

AgrEvader: Poisoning membership inference against Byzantine-robust federated learning

Y Zhang, G Bai, MAP Chamikara, M Ma… - Proceedings of the …, 2023 - dl.acm.org
The Poisoning Membership Inference Attack (PMIA) is a newly emerging privacy attack that
poses a significant threat to federated learning (FL). An adversary conducts data poisoning …

Benchmarking robustness and privacy-preserving methods in federated learning

Z Alebouyeh, AJ Bidgoly - Future Generation Computer Systems, 2024 - Elsevier
Federated learning (FL) is a machine learning framework that enables the use of user data
for training without the need to share the data with the central server. FL's decentralized …

Citadel: Protecting data privacy and model confidentiality for collaborative learning

C Zhang, J Xia, B Yang, H Puyang, W Wang… - Proceedings of the …, 2021 - dl.acm.org
Many organizations own data but have limited machine learning expertise (data owners). On
the other hand, organizations that have expertise need data from diverse sources to train …

Towards efficient synchronous federated training: A survey on system optimization strategies

Z Jiang, W Wang, B Li, Q Yang - IEEE Transactions on Big Data, 2022 - ieeexplore.ieee.org
The increasing demand for privacy-preserving collaborative learning has given rise to a new
computing paradigm called federated learning (FL), in which clients collaboratively train a …