Towards trustworthy ai: A review of ethical and robust large language models
The rapid progress in Large Language Models (LLMs) could transform many fields, but their
fast development creates significant challenges for oversight, ethical creation, and building …
Nbias: A natural language processing framework for BIAS identification in text
Bias in textual data can lead to skewed interpretations and outcomes when the data is used.
These biases could perpetuate stereotypes, discrimination, or other forms of unfair …
Addressing bias in generative AI: Challenges and research opportunities in information management
X Wei, N Kumar, H Zhang - arXiv preprint arXiv:2502.10407, 2025 - arxiv.org
Generative AI technologies, particularly Large Language Models (LLMs), have transformed
information management systems but introduced substantial biases that can compromise …
ChatGPT based data augmentation for improved parameter-efficient debiasing of LLMs
Large Language Models (LLMs), while powerful, exhibit harmful social biases. Debiasing is
often challenging due to computational costs, data constraints, and potential degradation of …
Metrics for what, metrics for whom: assessing actionability of bias evaluation metrics in NLP
This paper introduces the concept of actionability in the context of bias measures in natural
language processing (NLP). We define actionability as the degree to which a …
Fair Text Classification with Wasserstein Independence
Group fairness is a central research topic in text classification, where reaching fair treatment
between sensitive groups (e.g., women vs. men) remains an open challenge. This paper …
Gender Bias Mitigation for Bangla Classification Tasks
SKS Joy, AH Mahy, M Sultana, AM Abha… - arXiv preprint arXiv …, 2024 - arxiv.org
In this study, we investigate gender bias in Bangla pretrained language models, a largely
underexplored area in low-resource languages. To assess this bias, we applied gender …
A Data-Centric Approach to Detecting and Mitigating Demographic Bias in Pediatric Mental Health Text: A Case Study in Anxiety Detection
Introduction: Healthcare AI models often inherit biases from their training data. While efforts
have primarily targeted bias in structured data, mental health heavily depends on …
REFINE-LM: Mitigating Language Model Stereotypes via Reinforcement Learning
With the introduction of (large) language models, there has been significant concern about
the unintended bias such models may inherit from their training data. A number of studies …
Breaking Bias: Alpha Weighted Loss in Multi-objective Learning Taming Gender Stereotypes
Navigating the uncertainties of job classification and gender bias, this paper presents a multi-
objective learning approach using a BERT-based model that concurrently handles maximizing …