Diversity and inclusion in artificial intelligence

E Fosch-Villaronga, A Poulsen - …: Regulating AI and Applying AI in Legal …, 2022 - Springer
Discrimination and bias are inherent problems of many AI applications, as seen, for
instance, in face recognition systems not recognizing dark-skinned women and content …

Intrinsic bias metrics do not correlate with application bias

S Goldfarb-Tarrant, R Marchant, RM Sánchez… - arXiv preprint arXiv …, 2020 - arxiv.org
Natural Language Processing (NLP) systems learn harmful societal biases that cause them
to amplify inequality as they are deployed in more and more situations. To guide efforts at …

A survey on gender bias in natural language processing

K Stańczak, I Augenstein - arXiv preprint arXiv:2112.14168, 2021 - arxiv.org
Language can be used as a means of reproducing and enforcing harmful stereotypes and
biases and has been analysed as such in numerous studies. In this paper, we present a …

Towards debiasing sentence representations

PP Liang, IM Li, E Zheng, YC Lim… - arXiv preprint arXiv …, 2020 - arxiv.org
As natural language processing methods are increasingly deployed in real-world scenarios
such as healthcare, legal systems, and social science, it becomes necessary to recognize …

You reap what you sow: On the challenges of bias evaluation under multilingual settings

Z Talat, A Névéol, S Biderman, M Clinciu… - … #5 -- Workshop on …, 2022 - aclanthology.org
Evaluating bias, fairness, and social impact in monolingual language models is a difficult
task. This challenge is further compounded when language modeling occurs in a …

HONEST: Measuring hurtful sentence completion in language models

D Nozza, F Bianchi, D Hovy - … of the 2021 Conference of the …, 2021 - iris.unibocconi.it
Language models have revolutionized the field of NLP. However, language models
capture and proliferate hurtful stereotypes, especially in text generation. Our results show …