Bias mitigation for machine learning classifiers: A comprehensive survey

M Hort, Z Chen, JM Zhang, M Harman… - ACM Journal on …, 2024 - dl.acm.org
This article provides a comprehensive survey of bias mitigation methods for achieving
fairness in Machine Learning (ML) models. We collect a total of 341 publications concerning …

What-is and how-to for fairness in machine learning: A survey, reflection, and perspective

Z Tang, J Zhang, K Zhang - ACM Computing Surveys, 2023 - dl.acm.org
We review and reflect on fairness notions proposed in machine learning literature and make
an attempt to draw connections to arguments in moral and political philosophy, especially …

Towards out-of-distribution generalization: A survey

J Liu, Z Shen, Y He, X Zhang, R Xu, H Yu… - arXiv preprint arXiv …, 2021 - arxiv.org
Traditional machine learning paradigms are based on the assumption that both training and
test data follow the same statistical pattern, which is mathematically referred to as …

Inherent tradeoffs in learning fair representations

H Zhao, GJ Gordon - Journal of Machine Learning Research, 2022 - jmlr.org
Real-world applications of machine learning tools in high-stakes domains are often
regulated to be fair, in the sense that the predicted target should satisfy some quantitative …

Quantifying and alleviating political bias in language models

R Liu, C Jia, J Wei, G Xu, S Vosoughi - Artificial Intelligence, 2022 - Elsevier
Current large-scale language models can be politically biased as a result of the data they
are trained on, potentially causing serious problems when they are deployed in real-world …

On dyadic fairness: Exploring and mitigating bias in graph connections

P Li, Y Wang, H Zhao, P Hong, H Liu - International conference on …, 2021 - par.nsf.gov
Disparate impact has raised serious concerns in machine learning applications and its
societal impacts. In response to the need of mitigating discrimination, fairness has been …

Achieving fairness at no utility cost via data reweighing with influence

P Li, H Liu - International conference on machine learning, 2022 - proceedings.mlr.press
With the fast development of algorithmic governance, fairness has become a compulsory
property for machine learning models to suppress unintentional discrimination. In this paper …

Mitigating political bias in language models through reinforced calibration

R Liu, C Jia, J Wei, G Xu, L Wang… - Proceedings of the AAAI …, 2021 - ojs.aaai.org
Current large-scale language models can be politically biased as a result of the data they
are trained on, potentially causing serious problems when they are deployed in real-world …

Fair and optimal classification via post-processing

R Xian, L Yin, H Zhao - International conference on machine …, 2023 - proceedings.mlr.press
To mitigate the bias exhibited by machine learning models, fairness criteria can be
integrated into the training process to ensure fair treatment across all demographics, but it …

On learning fairness and accuracy on multiple subgroups

C Shui, G Xu, Q Chen, J Li, CX Ling… - Advances in …, 2022 - proceedings.neurips.cc
We propose an analysis in fair learning that preserves the utility of the data while reducing
prediction disparities under the criteria of group sufficiency. We focus on the scenario where …