Learning from disagreement: A survey

AN Uma, T Fornaciari, D Hovy, S Paun, B Plank… - Journal of Artificial …, 2021 - jair.org
Many tasks in Natural Language Processing (NLP) and Computer Vision (CV) offer
evidence that humans disagree, from objective tasks such as part-of-speech tagging to more …

A checklist to combat cognitive biases in crowdsourcing

T Draws, A Rieger, O Inel, U Gadiraju… - Proceedings of the AAAI …, 2021 - ojs.aaai.org
Recent research has demonstrated that cognitive biases such as the confirmation bias or the
anchoring effect can negatively affect the quality of crowdsourced data. In practice, however …

HacRED: A large-scale relation extraction dataset toward hard cases in practical applications

Q Cheng, J Liu, X Qu, J Zhao, J Liang… - Findings of the …, 2021 - aclanthology.org
Relation extraction (RE) is an essential topic in natural language processing and has
attracted extensive attention. Current RE approaches achieve fantastic results on common …

Efficient elicitation approaches to estimate collective crowd answers

JJY Chung, JY Song, S Kutty, S Hong, J Kim… - Proceedings of the …, 2019 - dl.acm.org
When crowdsourcing the creation of machine learning datasets, statistical distributions that
capture diverse answers can represent ambiguous data better than a single best answer …

A citizen science approach for analyzing social media with crowdsourcing

C Bono, MO Mülâyim, C Cappiello, MJ Carman… - IEEE …, 2023 - ieeexplore.ieee.org
Social media have the potential to provide timely information about emergency situations
and sudden events. However, finding relevant information among the millions of posts being …

Establishing annotation quality in multi-label annotations

M Marchal, M Scholman, F Yung… - Proceedings of the 29th …, 2022 - aclanthology.org
In many linguistic fields requiring annotated data, multiple interpretations of a single item are
possible. Multi-label annotations more accurately reflect this possibility. However, allowing …

Human rationales as attribution priors for explainable stance detection

S Jayaram, E Allaway - Proceedings of the 2021 Conference on …, 2021 - aclanthology.org
As NLP systems become better at detecting opinions and beliefs from text, it is important to
ensure not only that models are accurate but also that they arrive at their predictions in ways …

Collect, measure, repeat: Reliability factors for responsible AI data collection

O Inel, T Draws, L Aroyo - Proceedings of the AAAI Conference on …, 2023 - ojs.aaai.org
The rapid entry of machine learning approaches in our daily activities and high-stakes
domains demands transparency and scrutiny of their fairness and reliability. To help gauge …

Human-annotated rationales and explainable text classification: a survey

E Herrewijnen, D Nguyen, F Bex… - Frontiers in Artificial …, 2024 - frontiersin.org
Asking annotators to explain “why” they labeled an instance yields annotator rationales:
natural language explanations that provide reasons for classifications. In this work, we …

Capturing perspectives of crowdsourced annotators in subjective learning tasks

N Mokhberian, MG Marmarelis, FR Hopp… - arXiv preprint arXiv …, 2023 - arxiv.org
Most classification models assume a single ground truth label for
each data point. However, subjective tasks like toxicity classification can lead to genuine …