A survey on fairness in large language models

Y Li, M Du, R Song, X Wang, Y Wang - arXiv preprint arXiv:2308.10149, 2023 - arxiv.org
Large Language Models (LLMs) have shown powerful performance and development
prospects and are widely deployed in the real world. However, LLMs can capture social …

Beyond discrimination: Generative AI applications and ethical challenges in forensic psychiatry

L Tortora - Frontiers in Psychiatry, 2024 - frontiersin.org
The advent and growing popularity of generative artificial intelligence (GenAI) holds the
potential to revolutionise AI applications in forensic psychiatry and criminal justice, which …

Biases in large language models: origins, inventory, and discussion

R Navigli, S Conia, B Ross - ACM Journal of Data and Information …, 2023 - dl.acm.org
In this article, we introduce and discuss the pervasive issue of bias in the large language
models that are currently at the core of mainstream approaches to Natural Language …

Marketing with ChatGPT: Navigating the ethical terrain of GPT-based chatbot technology

P Rivas, L Zhao - AI, 2023 - mdpi.com
ChatGPT is an AI-powered chatbot platform that enables human users to converse with
machines. It utilizes natural language processing and machine learning algorithms …

LAVT: Language-aware vision transformer for referring image segmentation

Z Yang, J Wang, Y Tang, K Chen… - Proceedings of the …, 2022 - openaccess.thecvf.com
Referring image segmentation is a fundamental vision-language task that aims to segment
out an object referred to by a natural language expression from an image. One of the key …

Having beer after prayer? Measuring cultural bias in large language models

T Naous, MJ Ryan, A Ritter, W Xu - arXiv preprint arXiv:2305.14456, 2023 - arxiv.org
As the reach of large language models (LMs) expands globally, their ability to cater to
diverse cultural contexts becomes crucial. Despite advancements in multilingual …

Large language models are geographically biased

R Manvi, S Khanna, M Burke, D Lobell… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) inherently carry the biases contained in their training
corpora, which can lead to the perpetuation of societal harm. As the impact of these …

“They only care to show us the wheelchair”: Disability representation in text-to-image AI models

KA Mack, R Qadri, R Denton, SK Kane… - Proceedings of the 2024 …, 2024 - dl.acm.org
This paper reports on disability representation in images output from text-to-image (T2I)
generative AI systems. Through eight focus groups with 25 people with disabilities, we found …

"I wouldn't say offensive but...": Disability-centered perspectives on large language models

V Gadiraju, S Kane, S Dev, A Taylor, D Wang… - Proceedings of the …, 2023 - dl.acm.org
Large language models (LLMs) trained on real-world data can inadvertently reflect harmful
societal biases, particularly toward historically marginalized communities. While previous …

A hunt for the snark: Annotator diversity in data practices

S Kapania, AS Taylor, D Wang - … of the 2023 CHI Conference on Human …, 2023 - dl.acm.org
Diversity in datasets is a key component to building responsible AI/ML. Despite this
recognition, we know little about the diversity among the annotators involved in data …