Diversity and inclusion in artificial intelligence
Discrimination and bias are inherent problems of many AI applications, as seen in, for
instance, face recognition systems not recognizing dark-skinned women and content …
Intrinsic bias metrics do not correlate with application bias
Natural Language Processing (NLP) systems learn harmful societal biases that cause them
to amplify inequality as they are deployed in more and more situations. To guide efforts at …
A survey on gender bias in natural language processing
Language can be used as a means of reproducing and enforcing harmful stereotypes and
biases and has been analysed as such in numerous studies. In this paper, we present a …
Towards debiasing sentence representations
As natural language processing methods are increasingly deployed in real-world scenarios
such as healthcare, legal systems, and social science, it becomes necessary to recognize …
You reap what you sow: On the challenges of bias evaluation under multilingual settings
Evaluating bias, fairness, and social impact in monolingual language models is a difficult
task. This challenge is further compounded when language modeling occurs in a …
HONEST: Measuring hurtful sentence completion in language models
Abstract Language models have revolutionized the field of NLP. However, language models
capture and proliferate hurtful stereotypes, especially in text generation. Our results show …