Having beer after prayer? Measuring cultural bias in large language models

T Naous, MJ Ryan, A Ritter, W Xu - arXiv preprint arXiv:2305.14456, 2023 - arxiv.org
As the reach of large language models (LMs) expands globally, their ability to cater to
diverse cultural contexts becomes crucial. Despite advancements in multilingual …

How ready are pre-trained abstractive models and LLMs for legal case judgement summarization?

A Deroy, K Ghosh, S Ghosh - arXiv preprint arXiv:2306.01248, 2023 - arxiv.org
Automatic summarization of legal case judgements has traditionally been attempted by
using extractive summarization methods. However, in recent years, abstractive …

[PDF][PDF] Survey on sociodemographic bias in natural language processing

V Gupta, PN Venkit, S Wilson… - arXiv preprint arXiv …, 2023 - researchgate.net
Deep neural networks often learn unintended bias during training, which might have harmful
effects when deployed in real-world settings. This work surveys 214 papers related to …

[PDF][PDF] Pipelines for social bias testing of large language models

D Nozza, F Bianchi, D Hovy - Proceedings of BigScience …, 2022 - iris.unibocconi.it
The maturity level of language models is now at a stage in which many companies rely on
them to solve various tasks. However, while research has shown how biased and harmful …

Gender Bias in Natural Language Processing and Computer Vision: A Comparative Survey

M Bartl, A Mandal, S Leavy, S Little - ACM Computing Surveys, 2024 - dl.acm.org
Taking an interdisciplinary approach to surveying issues around gender bias in textual and
visual AI, we present literature on gender bias detection and mitigation in NLP, CV, as well …

Fairness in language models beyond English: Gaps and challenges

K Ramesh, S Sitaram, M Choudhury - arXiv preprint arXiv:2302.12578, 2023 - arxiv.org
With language models becoming increasingly ubiquitous, it has become essential to
address their inequitable treatment of diverse demographic groups and factors. Most …

On the independence of association bias and empirical fairness in language models

L Cabello, AK Jørgensen, A Søgaard - … of the 2023 ACM Conference on …, 2023 - dl.acm.org
The societal impact of pre-trained language models has prompted researchers to probe
them for strong associations between protected attributes and value-loaded terms, from slur …

Looking for a handsome carpenter! Debiasing GPT-3 job advertisements

C Borchers, DS Gala, B Gilburt, E Oravkin… - arXiv preprint arXiv …, 2022 - arxiv.org
The growing capability and availability of generative language models has enabled a wide
range of new downstream tasks. Academic research has identified, quantified and mitigated …

Understanding stereotypes in language models: Towards robust measurement and zero-shot debiasing

J Mattern, Z Jin, M Sachan, R Mihalcea… - arXiv, 2022 - research-collection.ethz.ch
Generated texts from large pretrained language models have been shown to exhibit a
variety of harmful, human-like biases about various demographics. These findings prompted …

Choose your lenses: Flaws in gender bias evaluation

H Orgad, Y Belinkov - arXiv preprint arXiv:2210.11471, 2022 - arxiv.org
Considerable efforts to measure and mitigate gender bias in recent years have led to the
introduction of an abundance of tasks, datasets, and metrics used in this vein. In this position …