Bias mitigation for machine learning classifiers: A comprehensive survey

M Hort, Z Chen, JM Zhang, M Harman… - ACM Journal on …, 2024 - dl.acm.org
This article provides a comprehensive survey of bias mitigation methods for achieving
fairness in Machine Learning (ML) models. We collect a total of 341 publications concerning …

Freeze then train: Towards provable representation learning under spurious correlations and feature noise

H Ye, J Zou, L Zhang - International Conference on Artificial …, 2023 - proceedings.mlr.press
The existence of spurious correlations such as image backgrounds in the training
environment can make empirical risk minimization (ERM) perform badly in the test …

Last-layer fairness fine-tuning is simple and effective for neural networks

Y Mao, Z Deng, H Yao, T Ye, K Kawaguchi… - arXiv preprint arXiv …, 2023 - arxiv.org
As machine learning has been deployed ubiquitously across applications in modern data
science, algorithmic fairness has become a great concern. Among them, imposing fairness …

Fair scratch tickets: Finding fair sparse networks without weight training

P Tang, W Yao, Z Li, Y Liu - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Recent studies suggest that computer vision models come at the risk of compromising
fairness. There are extensive works to alleviate unfairness in computer vision using pre …

Reinforcement learning with stepwise fairness constraints

Z Deng, H Sun, ZS Wu, L Zhang, DC Parkes - arXiv preprint arXiv …, 2022 - arxiv.org
AI methods are used in societally important settings, ranging from credit to employment to
housing, and it is crucial to provide fairness in regard to algorithmic decision making …

Sifting through the chaff: On utilizing execution feedback for ranking the generated code candidates

Z Sun, Y Wan, J Li, H Zhang, Z Jin, G Li… - Proceedings of the 39th …, 2024 - dl.acm.org
Large Language Models (LLMs), such as GPT-4, StarCoder, and Code Llama, are
transforming the way developers approach programming by automatically generating code …

Properties of fairness measures in the context of varying class imbalance and protected group ratios

D Brzezinski, J Stachowiak, J Stefanowski… - ACM Transactions on …, 2024 - dl.acm.org
Society is increasingly relying on predictive models in fields like criminal justice, credit risk
management, and hiring. To prevent such automated systems from discriminating against …

A Critical Review of Predominant Bias in Neural Networks

J Li, M Khayatkhoei, J Zhu, H Xie, ME Hussein… - arXiv preprint arXiv …, 2025 - arxiv.org
Bias issues of neural networks garner significant attention along with their promising
advancement. Among various bias issues, mitigating two predominant biases is crucial in …

Mitigating algorithmic bias with limited annotations

G Wang, M Du, N Liu, N Zou, X Hu - Joint European Conference on …, 2023 - Springer
Existing work on fairness modeling commonly assumes that sensitive attributes for all
instances are fully available, which may not be true in many real-world applications due to …

A theoretical approach to characterize the accuracy-fairness trade-off pareto frontier

H Tang, L Cheng, N Liu, M Du - arXiv preprint arXiv:2310.12785, 2023 - arxiv.org
While the accuracy-fairness trade-off has been frequently observed in the literature of fair
machine learning, rigorous theoretical analyses have been scarce. To demystify this long …