Bias mitigation for machine learning classifiers: A comprehensive survey

M Hort, Z Chen, JM Zhang, M Harman… - ACM Journal on …, 2024 - dl.acm.org
This article provides a comprehensive survey of bias mitigation methods for achieving
fairness in Machine Learning (ML) models. We collect a total of 341 publications concerning …

A systematic review of fairness in machine learning

RT Rabonato, L Berton - AI and Ethics, 2024 - Springer
Fairness in Machine Learning (ML) has emerged as a crucial concern as these
models increasingly influence critical decisions in various domains, including healthcare …

Fairness without demographic data: A survey of approaches

C Ashurst, A Weller - Proceedings of the 3rd ACM Conference on Equity …, 2023 - dl.acm.org
Detecting, measuring and mitigating various measures of unfairness are core aims of
algorithmic fairness research. However, the most prominent approaches require access to …

When less is enough: Positive and unlabeled learning model for vulnerability detection

XC Wen, X Wang, C Gao, S Wang… - 2023 38th IEEE/ACM …, 2023 - ieeexplore.ieee.org
Automated code vulnerability detection has gained increasing attention in recent years. The
deep learning (DL)-based methods, which implicitly learn vulnerable code patterns, have …

Adapting fairness interventions to missing values

R Feng, F Calmon, H Wang - Advances in Neural …, 2023 - proceedings.neurips.cc
Missing values in real-world data pose a significant and unique challenge to algorithmic
fairness. Different demographic groups may be unequally affected by missing data, and the …

Fairness and sequential decision making: Limits, lessons, and opportunities

SB Nashed, J Svegliato, SL Blodgett - arXiv preprint arXiv:2301.05753, 2023 - arxiv.org
As automated decision making and decision assistance systems become common in
everyday life, research on the prevention or mitigation of potential harms that arise from …

Mitigating source bias for fairer weak supervision

C Shin, S Cromp, D Adila… - Advances in Neural …, 2023 - proceedings.neurips.cc
Weak supervision enables efficient development of training sets by reducing the need for
ground truth labels. However, the techniques that make weak supervision attractive---such …

FairIF: Boosting fairness in deep learning via influence functions with validation set sensitive attributes

H Wang, Z Wu, J He - Proceedings of the 17th ACM International …, 2024 - dl.acm.org
Empirical loss minimization during machine learning training can inadvertently introduce
bias, stemming from discrimination and societal prejudices present in the data. To address …

Challenges for AI in healthcare systems

M Bertl, Y Lamo, M Leucker, T Margaria… - … on Bridging the Gap …, 2023 - library.oapen.org
This paper overviews the challenges of using artificial intelligence (AI) methods when
building healthcare systems, as discussed at the AIsola Conference in 2023. It focuses on …

From Individual Experience to Collective Evidence: A Reporting-Based Framework for Identifying Systemic Harms

J Dai, P Gradu, ID Raji, B Recht - arXiv preprint arXiv:2502.08166, 2025 - arxiv.org
When an individual reports a negative interaction with some system, how can their personal
experience be contextualized within broader patterns of system behavior? We study the …