Recent advances of differential privacy in centralized deep learning: A systematic survey
Differential Privacy has become a widely popular method for data protection in machine
learning, especially since it allows formulating strict mathematical privacy guarantees. This …
Privacy-Preserving Data-Driven Learning Models for Emerging Communication Networks: A Comprehensive Survey
With the proliferation of Beyond 5G (B5G) communication systems and heterogeneous
networks, mobile broadband users are generating massive volumes of data that undergo …
Deep learning with Gaussian differential privacy
Deep learning models are often trained on datasets that contain sensitive information such
as individuals' shopping transactions, personal contacts, and medical records. An …
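The entry above concerns Gaussian noise as a differential-privacy mechanism. A minimal sketch of the classic Gaussian mechanism (the function name and interface here are illustrative assumptions, not from any of the listed papers) adds noise calibrated to the query's sensitivity and the privacy budget:

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng=None):
    """Release `value` with (epsilon, delta)-DP using the classic
    Gaussian mechanism: add N(0, sigma^2) noise with
    sigma = sensitivity * sqrt(2 * ln(1.25/delta)) / epsilon
    (the standard calibration, valid for epsilon < 1)."""
    rng = rng or np.random.default_rng()
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma, size=np.shape(value))

# Example: privately release a 5-dimensional statistic with L2-sensitivity 1.
noisy = gaussian_mechanism(np.zeros(5), sensitivity=1.0,
                           epsilon=0.5, delta=1e-5,
                           rng=np.random.default_rng(0))
```

Note that Gaussian *differential privacy* (as in the paper title) is a refined accounting framework built on this mechanism, not a different noise distribution.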
Privacy protection in intelligent vehicle networking: A novel federated learning algorithm based on information fusion
Federated learning is an effective technique to solve the problem of information fusion and
information sharing in intelligent vehicle networking. However, most of the existing federated …
Gradient leakage attack resilient deep learning
Gradient leakage attacks are considered one of the wickedest privacy threats in deep
learning, as attackers covertly spy on gradient updates during iterative training without …
DPSUR: accelerating differentially private stochastic gradient descent using selective update and release
Machine learning models are known to memorize private data to reduce their training loss,
which can be inadvertently exploited by privacy attacks such as model inversion and …
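The DPSUR entry above builds on differentially private stochastic gradient descent (DP-SGD). A minimal sketch of one DP-SGD step, assuming per-example gradients are already available (the function name and parameters are illustrative, not DPSUR's actual interface), clips each example's gradient and perturbs the average with Gaussian noise:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=None):
    """One DP-SGD update: clip each per-example gradient to L2 norm
    `clip_norm`, average, then add Gaussian noise with standard
    deviation noise_multiplier * clip_norm / batch_size."""
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(
        0.0, noise_multiplier * clip_norm / len(per_example_grads),
        size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

# With noise_multiplier=0 the step reduces to clipped (non-private) SGD,
# which makes the clipping behavior easy to check.
step = dp_sgd_step(np.zeros(3), [np.array([3.0, 4.0, 0.0])],
                   lr=0.1, clip_norm=1.0, noise_multiplier=0.0)
```

DPSUR's contribution (selectively updating and releasing) sits on top of this basic step; the sketch shows only the standard clip-and-noise core.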
On the convergence and calibration of deep learning with differential privacy
Differentially private (DP) training preserves the data privacy usually at the cost of slower
convergence (and thus lower accuracy), as well as more severe mis-calibration than its non …
Low-Cost High-Power Membership Inference Attacks
Membership inference attacks aim to detect if a particular data point was used in training a
model. We design a novel statistical test to perform robust membership inference attacks …
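The membership-inference entry above tests whether a point was in a model's training set. The paper's attack is a calibrated statistical test; as a much simpler illustrative baseline (not the paper's method), a loss-threshold attack flags low-loss points as members, since trained models typically fit members better:

```python
import numpy as np

def loss_threshold_attack(member_losses, nonmember_losses, threshold):
    """Baseline membership inference: predict 'member' when a point's
    loss under the target model is below `threshold`.
    Returns (true_positive_rate, false_positive_rate)."""
    member_losses = np.asarray(member_losses)
    nonmember_losses = np.asarray(nonmember_losses)
    tpr = float(np.mean(member_losses < threshold))
    fpr = float(np.mean(nonmember_losses < threshold))
    return tpr, fpr

# Toy example: members have clearly lower loss than non-members.
tpr, fpr = loss_threshold_attack([0.1, 0.2], [1.0, 2.0], threshold=0.5)
```

The gap between TPR and FPR (the attack's "advantage") is one common way to quantify privacy leakage, and is exactly what DP training is meant to bound.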
Differential privacy in deep learning: Privacy and beyond
Motivated by the security risks of deep neural networks, such as various membership and
attribute inference attacks, differential privacy has emerged as a promising approach for …
Scalable differential privacy with certified robustness in adversarial learning
In this paper, we aim to develop a scalable algorithm to preserve differential privacy (DP) in
adversarial learning for deep neural networks (DNNs), with certified robustness to …