Recent advances of differential privacy in centralized deep learning: A systematic survey

L Demelius, R Kern, A Trügler - ACM Computing Surveys, 2023 - dl.acm.org
Differential privacy has become a widely used method for data protection in machine
learning, especially since it makes it possible to formulate strict mathematical privacy guarantees. This …

Privacy-Preserving Data-Driven Learning Models for Emerging Communication Networks: A Comprehensive Survey

MM Fouda, ZM Fadlullah, MI Ibrahem… - … Surveys & Tutorials, 2024 - ieeexplore.ieee.org
With the proliferation of Beyond 5G (B5G) communication systems and heterogeneous
networks, mobile broadband users are generating massive volumes of data that undergo …

Deep learning with Gaussian differential privacy

Z Bu, J Dong, Q Long, WJ Su - Harvard Data Science Review, 2020 - ncbi.nlm.nih.gov
Deep learning models are often trained on datasets that contain sensitive information such
as individuals' shopping transactions, personal contacts, and medical records. An …

Privacy protection in intelligent vehicle networking: A novel federated learning algorithm based on information fusion

Z Qu, Y Tang, G Muhammad, P Tiwari - Information Fusion, 2023 - Elsevier
Federated learning is an effective technique for information fusion and
information sharing in intelligent vehicle networking. However, most of the existing federated …

Gradient leakage attack resilient deep learning

W Wei, L Liu - IEEE Transactions on Information Forensics and …, 2021 - ieeexplore.ieee.org
Gradient leakage attacks are considered among the most severe privacy threats in deep
learning, as attackers covertly spy on gradient updates during iterative training without …

DPSUR: accelerating differentially private stochastic gradient descent using selective update and release

J Fu, Q Ye, H Hu, Z Chen, L Wang, K Wang… - arXiv preprint arXiv …, 2023 - arxiv.org
Machine learning models are known to memorize private data to reduce their training loss,
which can be inadvertently exploited by privacy attacks such as model inversion and …
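For context on the baseline this paper accelerates: vanilla differentially private SGD clips each per-example gradient to a fixed norm bound, averages the clipped gradients, and adds Gaussian noise calibrated to that bound before taking the step. A minimal NumPy sketch of one such update (function name, learning rate, and noise multiplier are illustrative, not taken from the paper):

```python
import numpy as np

def dp_sgd_step(weights, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=None):
    """One vanilla DP-SGD update: clip each example's gradient to
    clip_norm, average, then add Gaussian noise scaled to the clip bound."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise std is proportional to the per-example sensitivity clip_norm,
    # divided by the batch size because we noise the averaged gradient.
    noise = rng.normal(0.0,
                       noise_multiplier * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return weights - lr * (mean_grad + noise)
```

In practice the noise multiplier is chosen by a privacy accountant to meet a target (ε, δ) budget over all training iterations; DPSUR's selective update and release mechanism, which decides whether to apply each noisy update, is not shown here.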

On the convergence and calibration of deep learning with differential privacy

Z Bu, H Wang, Z Dai, Q Long - Transactions on Machine Learning …, 2023 - ncbi.nlm.nih.gov
Differentially private (DP) training preserves data privacy, usually at the cost of slower
convergence (and thus lower accuracy), as well as more severe miscalibration than its non …

Low-Cost High-Power Membership Inference Attacks

S Zarifzadeh, P Liu, R Shokri - Forty-first International Conference on …, 2024 - openreview.net
Membership inference attacks aim to detect if a particular data point was used in training a
model. We design a novel statistical test to perform robust membership inference attacks …

Differential privacy in deep learning: Privacy and beyond

Y Wang, Q Wang, L Zhao, C Wang - Future Generation Computer Systems, 2023 - Elsevier
Motivated by the security risks of deep neural networks, such as various membership and
attribute inference attacks, differential privacy has emerged as a promising approach for …

Scalable differential privacy with certified robustness in adversarial learning

H Phan, MT Thai, H Hu, R Jin… - … on Machine Learning, 2020 - proceedings.mlr.press
In this paper, we aim to develop a scalable algorithm to preserve differential privacy (DP) in
adversarial learning for deep neural networks (DNNs), with certified robustness to …