Privacy and Fairness in Federated Learning: On the Perspective of Tradeoff

H Chen, T Zhu, T Zhang, W Zhou, PS Yu - ACM Computing Surveys, 2023 - dl.acm.org
Federated learning (FL) has been a hot topic in recent years. Ever since it was introduced,
researchers have endeavored to devise FL systems that protect privacy or ensure fair …

Differential privacy has bounded impact on fairness in classification

P Mangold, M Perrot, A Bellet… - … on Machine Learning, 2023 - proceedings.mlr.press
We theoretically study the impact of differential privacy on fairness in classification. We prove
that, given a class of models, popular group fairness measures are pointwise Lipschitz …

Arbitrary decisions are a hidden cost of differentially private training

B Kulynych, H Hsu, C Troncoso… - Proceedings of the 2023 …, 2023 - dl.acm.org
Mechanisms used in privacy-preserving machine learning often aim to guarantee differential
privacy (DP) during model training. Practical DP-ensuring training methods use …

PILLAR: How to make semi-private learning more effective

F Pinto, Y Hu, F Yang, A Sanyal - 2024 IEEE Conference on …, 2024 - ieeexplore.ieee.org
In Semi-Supervised Semi-Private (SP) learning, the learner has access to both public
unlabelled and private labelled data. We propose PILLAR, an easy-to-implement and …

Unlocking accuracy and fairness in differentially private image classification

L Berrada, S De, JH Shen, J Hayes, R Stanforth… - arXiv preprint arXiv …, 2023 - arxiv.org
Privacy-preserving machine learning aims to train models on private data without leaking
sensitive information. Differential privacy (DP) is considered the gold standard framework for …

Pre-trained perceptual features improve differentially private image generation

F Harder, MJ Asadabadi, DJ Sutherland… - arXiv preprint arXiv …, 2022 - arxiv.org
Training even moderately sized generative models with differentially private stochastic
gradient descent (DP-SGD) is difficult: the required level of noise for reasonable levels of …

Towards adversarial evaluations for inexact machine unlearning

S Goel, A Prabhu, A Sanyal, SN Lim, P Torr… - arXiv preprint arXiv …, 2022 - arxiv.org
Machine Learning models face increased concerns regarding the storage of personal user
data and adverse impacts of corrupted data like backdoors or systematic bias. Machine …

Holistic survey of privacy and fairness in machine learning

S Shaham, A Hajisafi, MK Quan, DC Nguyen… - arXiv preprint arXiv …, 2023 - arxiv.org
Privacy and fairness are two crucial pillars of responsible Artificial Intelligence (AI) and
trustworthy Machine Learning (ML). Each objective has been independently studied in the …

Privacy and Fairness in Machine Learning: A Survey

S Shaham, A Hajisafi, MK Quan… - IEEE Transactions …, 2025 - ieeexplore.ieee.org
Privacy and fairness are two crucial pillars of responsible Artificial Intelligence (AI) and
trustworthy Machine Learning (ML). Each objective has been independently studied in the …

A law of adversarial risk, interpolation, and label noise

D Paleka, A Sanyal - arXiv preprint arXiv:2207.03933, 2022 - arxiv.org
In supervised learning, it has been shown that label noise in the data can be interpolated
without penalties on test accuracy. We show that interpolating label noise induces …