Fairness in machine learning: A survey

S Caton, C Haas - ACM Computing Surveys, 2024 - dl.acm.org
When Machine Learning technologies are used in contexts that affect citizens, companies as
well as researchers need to be confident that there will not be any unexpected social …

Bias mitigation for machine learning classifiers: A comprehensive survey

M Hort, Z Chen, JM Zhang, M Harman… - ACM Journal on …, 2024 - dl.acm.org
This article provides a comprehensive survey of bias mitigation methods for achieving
fairness in Machine Learning (ML) models. We collect a total of 341 publications concerning …

Data collection and quality challenges in deep learning: A data-centric AI perspective

SE Whang, Y Roh, H Song, JG Lee - The VLDB Journal, 2023 - Springer
Data-centric AI is at the center of a fundamental shift in software engineering where machine
learning becomes the new software, powered by big data and computing infrastructure …

LIFT: Language-interfaced fine-tuning for non-language machine learning tasks

T Dinh, Y Zeng, R Zhang, Z Lin… - Advances in …, 2022 - proceedings.neurips.cc
Fine-tuning pretrained language models (LMs) without making any architectural changes
has become the norm for learning various downstream language tasks. However, for non …
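
The title points at the paper's language interface for non-language tasks: tabular examples are serialized into text so that a pretrained LM can be fine-tuned on them without architectural changes. A minimal sketch of such serialization, with a hypothetical template and feature names that are not taken from the paper:

```python
def row_to_prompt(features: dict, label=None) -> str:
    # Serialize one tabular example as text so a pretrained LM can be
    # fine-tuned on it through its ordinary text interface.
    # The template wording and feature names below are illustrative only.
    body = ", ".join(f"{name} is {value}" for name, value in features.items())
    prompt = f"Given that {body}, what is the class?"
    return prompt if label is None else f"{prompt} Answer: {label}"

# Toy usage on one row of a hypothetical income dataset.
print(row_to_prompt({"age": 39, "education": "Bachelors", "hours per week": 40},
                    label="<=50K"))
```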

FairFed: Enabling group fairness in federated learning

YH Ezzeldin, S Yan, C He, E Ferrara… - Proceedings of the AAAI …, 2023 - ojs.aaai.org
Training ML models which are fair across different demographic groups is of critical
importance due to the increased integration of ML in crucial decision-making scenarios such …
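
For context, the group fairness FairFed targets is commonly measured with metrics such as demographic parity. A minimal sketch of that generic metric (not FairFed's federated aggregation scheme, which the snippet does not describe), assuming binary predictions and a binary group attribute:

```python
import numpy as np

def demographic_parity_difference(y_pred, group) -> float:
    # Absolute difference in positive-prediction rates between two demographic
    # groups; 0 means both groups receive positive predictions at the same rate.
    # `y_pred` holds 0/1 predictions, `group` holds 0/1 group membership
    # (an illustrative encoding, not tied to any particular dataset).
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

print(demographic_parity_difference([1, 0, 1, 1, 0, 0], [0, 0, 0, 1, 1, 1]))
```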

In-processing modeling techniques for machine learning fairness: A survey

M Wan, D Zha, N Liu, N Zou - ACM Transactions on Knowledge …, 2023 - dl.acm.org
Machine learning models are becoming pervasive in high-stakes applications. Despite their
clear benefits in terms of performance, the models could show discrimination against …

Sample selection for fair and robust training

Y Roh, K Lee, S Whang, C Suh - Advances in Neural …, 2021 - proceedings.neurips.cc
Fairness and robustness are critical elements of Trustworthy AI that need to be addressed
together. Fairness is about learning an unbiased model while robustness is about learning …

Fairness without demographics through knowledge distillation

J Chai, T Jang, X Wang - Advances in Neural Information …, 2022 - proceedings.neurips.cc
Most existing work on fairness assumes that demographic information is available in the training
set. In practice, due to legal or privacy concerns, when demographic information is not …

Fairly adaptive negative sampling for recommendations

X Chen, W Fan, J Chen, H Liu, Z Liu, Z Zhang… - Proceedings of the ACM …, 2023 - dl.acm.org
Pairwise learning strategies are prevalent for optimizing recommendation models on implicit
feedback data; they usually learn user preference by discriminating between positive (i.e., …
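
The pairwise strategy the snippet describes is typically instantiated as a BPR-style objective that pushes a sampled positive item's score above a sampled negative item's score for the same user. A minimal NumPy sketch of that generic loss (not the paper's fairness-aware negative sampler):

```python
import numpy as np

def bpr_loss(pos_scores: np.ndarray, neg_scores: np.ndarray) -> float:
    # Pairwise ranking loss: -log sigmoid(s_pos - s_neg), averaged over pairs.
    # logaddexp(0, -m) = log(1 + exp(-m)) is a numerically stable form.
    margin = pos_scores - neg_scores
    return float(np.mean(np.logaddexp(0.0, -margin)))

# Toy usage: scores a model might assign to sampled (positive, negative) item pairs.
pos = np.array([2.1, 0.3, 1.5])
neg = np.array([0.4, 0.9, 1.2])
print(bpr_loss(pos, neg))
```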

Improving fairness via federated learning

Y Zeng, H Chen, K Lee - arXiv preprint arXiv:2110.15545, 2021 - arxiv.org
Recently, numerous algorithms have been proposed for learning a fair classifier from
decentralized data. However, many theoretical and algorithmic questions remain open. First …