People who share encounters with racism are silenced online by humans and machines, but a guideline-reframing intervention holds promise
Are members of marginalized communities silenced on social media when they share
personal experiences of racism? Here, we investigate the role of algorithms, humans, and …
HateDay: Insights from a Global Hate Speech Dataset Representative of a Day on Twitter
To tackle the global challenge of online hate speech, a large body of research has
developed detection models to flag hate speech in the sea of online content. Yet, due to …
On the role of speech data in reducing toxicity detection bias
Text toxicity detection systems exhibit significant biases, producing disproportionate rates of
false positives on samples mentioning demographic groups. But what about toxicity …
Auditing multimodal large language models for context-aware content moderation
T Davidson - 2024 - files.osf.io
The development of multimodal large language models (MLLMs) offers new possibilities for
context-aware content moderation by integrating text, images, and other data. This study …
Hate Speech Detection via Retrieval of Coded Hate Symbols
김유민, 이환희 - Conference of the Institute of Electronics and Information Engineers (IEIE), 2024 - dbpia.co.kr
As coded hate symbols intended for hate expressions are newly generated on the web, an
automatic hate speech detection system that can reflect the meaning of new coded hate …