Privacy and fairness in Federated learning: on the perspective of Tradeoff
Federated learning (FL) has been a hot topic in recent years. Ever since it was introduced,
researchers have endeavored to devise FL systems that protect privacy or ensure fair …
Differential privacy has bounded impact on fairness in classification
We theoretically study the impact of differential privacy on fairness in classification. We prove
that, given a class of models, popular group fairness measures are pointwise Lipschitz …
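To unpack "pointwise Lipschitz" in this statement, here is a rough rendering in our own notation (not the paper's): for a group fairness measure F evaluated on a parametric classifier h_theta, pointwise Lipschitzness at theta means that models with nearby parameters have provably close fairness levels,

\[
  \lvert F(h_\theta) - F(h_{\theta'}) \rvert \;\le\; L(\theta)\, \lVert \theta - \theta' \rVert
  \qquad \text{for all } \theta' \text{ in a neighbourhood of } \theta ,
\]

with a constant L(theta) that may depend on the point. Under such a bound, the parameter perturbation introduced by a DP mechanism translates into a correspondingly bounded change in the fairness measure.
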
Arbitrary decisions are a hidden cost of differentially private training
Mechanisms used in privacy-preserving machine learning often aim to guarantee differential
privacy (DP) during model training. Practical DP-ensuring training methods use …
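To make the "arbitrary decisions" point concrete, the following minimal sketch (our own toy construction, not the paper's code or data) trains several DP-SGD-style logistic-regression models that differ only in their random seed and counts how many test predictions flip across runs; all function names and hyperparameters are illustrative.

import numpy as np

def dp_sgd_logreg(X, y, steps=300, lr=0.5, clip=1.0, noise_mult=1.0, batch=32, seed=0):
    # Toy DP-SGD: per-example gradient clipping plus Gaussian noise on the summed gradient.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        idx = rng.choice(n, size=batch, replace=False)
        p = 1.0 / (1.0 + np.exp(-(X[idx] @ w)))           # sigmoid predictions
        grads = (p - y[idx])[:, None] * X[idx]            # per-example gradients, shape (batch, d)
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads / np.maximum(1.0, norms / clip)     # clip each example to L2 norm <= clip
        noise = rng.normal(0.0, noise_mult * clip, size=d)
        w -= lr * (grads.sum(axis=0) + noise) / batch     # noisy average-gradient step
    return w

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 5))
true_w = rng.normal(size=5)
y = (X @ true_w + 0.5 * rng.normal(size=500) > 0).astype(float)
X_test = rng.normal(size=(200, 5))

# Retrain with different seeds: the data is identical, only the DP randomness changes.
votes = np.stack([X_test @ dp_sgd_logreg(X, y, seed=s) > 0 for s in range(10)])
share_positive = votes.mean(axis=0)
print("fraction of test points whose prediction depends on the seed:",
      float(((share_positive > 0) & (share_positive < 1)).mean()))

Even though every run sees exactly the same data, the clipping and Gaussian noise make some individual predictions depend on the draw of randomness, which is the kind of arbitrariness the paper quantifies.
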
PILLAR: How to make semi-private learning more effective
In Semi-Supervised Semi-Private (SP) learning, the learner has access to both public
unlabelled and private labelled data. We propose PILLAR, an easy-to-implement and …
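The snippet is cut off here, but a common pattern in semi-private learning is to spend the public unlabelled data on learning a low-dimensional representation (for instance, the top principal components of pre-trained features) and to run the privacy-sensitive training only in that smaller space, where less noise is needed. The sketch below is our own simplified illustration of that pattern with made-up shapes and names; it is not PILLAR's algorithm.

import numpy as np

def top_components(features, k):
    # PCA directions estimated on *public* unlabelled features: no privacy budget is spent here.
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k].T                                        # shape (d, k)

def noisy_linear_classifier(Z, y, noise_scale=0.01, seed=0):
    # Stand-in for private training in the projected space: a least-squares fit perturbed
    # with Gaussian noise to mimic the effect of DP noise (not a real DP mechanism).
    rng = np.random.default_rng(seed)
    w, *_ = np.linalg.lstsq(Z, 2 * y - 1, rcond=None)
    return w + rng.normal(0.0, noise_scale, size=w.shape)

rng = np.random.default_rng(0)
d, k = 256, 8
A = rng.normal(size=(k, d))                                # shared low-rank structure in the features
public_feats = rng.normal(size=(5000, k)) @ A + 0.1 * rng.normal(size=(5000, d))   # public, unlabelled
private_latent = rng.normal(size=(800, k))
private_feats = private_latent @ A + 0.1 * rng.normal(size=(800, d))               # private, labelled
private_labels = (private_latent[:, 0] > 0).astype(float)

P = top_components(public_feats, k)                        # learned from public data only
Z = private_feats @ P                                      # private data projected to k dimensions
w = noisy_linear_classifier(Z, private_labels)             # noisy training happens in low dimension
print("train accuracy in the projected space:", float(((Z @ w > 0) == (private_labels > 0)).mean()))

The key property of this pattern is that the projection is computed from public data only, so it consumes no privacy budget, and the private step operates in far fewer dimensions.
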
Unlocking accuracy and fairness in differentially private image classification
Privacy-preserving machine learning aims to train models on private data without leaking
sensitive information. Differential privacy (DP) is considered the gold standard framework for …
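For reference, the guarantee this line of work targets: a randomized training mechanism M is (epsilon, delta)-differentially private if, for every pair of datasets D and D' differing in a single record and every set of outcomes S,

\[
  \Pr[\, M(D) \in S \,] \;\le\; e^{\varepsilon}\, \Pr[\, M(D') \in S \,] + \delta .
\]

Smaller epsilon and delta mean a stronger guarantee; the noise required to achieve them is what typically costs accuracy and, as several of the works listed here study, can also affect different groups unevenly.
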
Pre-trained perceptual features improve differentially private image generation
Training even moderately-sized generative models with differentially-private stochastic
gradient descent (DP-SGD) is difficult: the required level of noise for reasonable levels of …
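One reason pre-trained perceptual features can help here, and the pattern the title points at, is that the private data can be reduced to a single summary, such as the mean of norm-bounded perceptual features, which is privatized once with the Gaussian mechanism; the generator is then trained to match that noisy summary without further access to the private images. The following sketch is our own illustration of that idea with assumed shapes and a simplified sensitivity argument, not the paper's method or code.

import numpy as np

def private_feature_mean(features, noise_mult=1.0, seed=0):
    # Gaussian-mechanism release of a feature mean. Assumes each per-image feature vector
    # is clipped to L2 norm <= 1, so replacing one image changes the mean by at most 2/n
    # in L2 norm; that bound is used as the sensitivity below.
    rng = np.random.default_rng(seed)
    n, d = features.shape
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    clipped = features / np.maximum(1.0, norms)            # enforce the norm bound
    sensitivity = 2.0 / n
    return clipped.mean(axis=0) + rng.normal(0.0, noise_mult * sensitivity, size=d)

def feature_matching_loss(generated_features, noisy_target_mean):
    # Train a generator by matching its mean features to the privatized target.
    return float(np.sum((generated_features.mean(axis=0) - noisy_target_mean) ** 2))

rng = np.random.default_rng(1)
private_feats = rng.normal(size=(10_000, 128))             # stand-in for perceptual features
target = private_feature_mean(private_feats)               # the only DP release needed
fake_feats = rng.normal(size=(256, 128))                   # features of current generator samples
print("feature-matching loss:", feature_matching_loss(fake_feats, target))

Because the noisy summary is released once and reused, its privacy cost does not grow with the number of generator training steps (post-processing).
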
Towards adversarial evaluations for inexact machine unlearning
Machine Learning models face increased concerns regarding the storage of personal user
data and adverse impacts of corrupted data like backdoors or systematic bias. Machine …
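A simple adversarial-flavoured check in this spirit, and much weaker than the evaluations the paper argues for, is a membership-inference test on the forget set: if the supposedly unlearned model still assigns systematically lower loss to forgotten examples than to comparable unseen ones, an attacker can tell them apart and the unlearning is inexact. The sketch below is our own illustration with synthetic loss values; no model or attack from the paper is reproduced.

import numpy as np

def threshold_attack_auc(forget_losses, unseen_losses):
    # AUC of the attacker that predicts "was in training" when the loss is small.
    # About 0.5 means the unlearning looks clean; close to 1.0 means the forgotten
    # data is still recognisable to the attacker.
    scores = np.concatenate([-forget_losses, -unseen_losses])      # lower loss => higher score
    labels = np.concatenate([np.ones_like(forget_losses), np.zeros_like(unseen_losses)])
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Illustrative numbers: the "unlearned" model is still more confident on the forget set.
rng = np.random.default_rng(7)
forget_losses = rng.gamma(shape=2.0, scale=0.2, size=1000)          # smaller losses
unseen_losses = rng.gamma(shape=2.0, scale=0.5, size=1000)          # larger losses
print("membership-attack AUC on the forget set:",
      round(threshold_attack_auc(forget_losses, unseen_losses), 3))
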
Holistic survey of privacy and fairness in machine learning
Privacy and fairness are two crucial pillars of responsible Artificial Intelligence (AI) and
trustworthy Machine Learning (ML). Each objective has been independently studied in the …
Privacy and Fairness in Machine Learning: A Survey
Privacy and fairness are two crucial pillars of responsible Artificial Intelligence (AI) and
trustworthy Machine Learning (ML). Each objective has been independently studied in the …
A law of adversarial risk, interpolation, and label noise
In supervised learning, it has been shown that label noise in the data can be interpolated
without penalties on test accuracy. We show that interpolating label noise induces …
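The intuition linking interpolation of label noise to adversarial risk can be illustrated with a toy interpolating classifier (our construction, not the paper's proof): an interpolating model must output the flipped label at every noisy training point, so any test input that lies close to a flipped point with a conflicting label can be misclassified by a perturbation no larger than that distance, which yields a lower bound on adversarial risk in terms of the noise rate.

import numpy as np

rng = np.random.default_rng(3)
n, d, noise_rate = 2000, 2, 0.1
X_train = rng.uniform(size=(n, d))
y_clean = (X_train[:, 0] > 0.5).astype(int)
flip = rng.random(n) < noise_rate
y_noisy = np.where(flip, 1 - y_clean, y_clean)          # observed (noisy) labels

# A 1-nearest-neighbour classifier interpolates the training set: it reproduces every
# noisy label exactly. For each clean test point, moving it onto the nearest *flipped*
# training point whose noisy label disagrees with the test point's true label forces a
# misclassification, so that distance upper-bounds the perturbation an attacker needs.
X_test = rng.uniform(size=(500, d))
y_test = (X_test[:, 0] > 0.5).astype(int)
dists = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
conflict = flip[None, :] & (y_noisy[None, :] != y_test[:, None])
eps_needed = np.where(conflict, dists, np.inf).min(axis=1)

for eps in (0.02, 0.05, 0.1):
    print(f"adversarial risk at radius {eps}: at least {float((eps_needed <= eps).mean()):.2f}")

Removing the label noise removes these particular adversarial directions, which is the sense in which interpolating noisy labels creates vulnerability in this toy setting.
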