A survey on task assignment in crowdsourcing

D Hettiachchi, V Kostakos, J Goncalves - ACM Computing Surveys …, 2022 - dl.acm.org
Quality improvement methods are essential to gathering high-quality crowdsourced data,
both for research and industry applications. A popular and broadly applicable method is task …

Toward verifiable and reproducible human evaluation for text-to-image generation

M Otani, R Togashi, Y Sawai… - Proceedings of the …, 2023 - openaccess.thecvf.com
Human evaluation is critical for validating the performance of text-to-image generative
models, as this highly cognitive process requires deep comprehension of text and images …

CrowdWorkSheets: Accounting for individual and collective identities underlying crowdsourced dataset annotation

M Díaz, I Kivlichan, R Rosen, D Baker… - Proceedings of the …, 2022 - dl.acm.org
Human annotated data plays a crucial role in machine learning (ML) research and
development. However, the ethical considerations around the processes and decisions that …

Quantifying the invisible labor in crowd work

C Toxtli, S Suri, S Savage - Proceedings of the ACM on human-computer …, 2021 - dl.acm.org
Crowdsourcing markets provide workers with a centralized place to find paid work. What
may not be obvious at first glance is that, in addition to the work they do for pay, crowd …

Examining technostress and its impact on worker well-being in the digital gig economy

A Umair, K Conboy, E Whelan - Internet Research, 2023 - emerald.com
Purpose: Online labour markets (OLMs) have recently become a widespread phenomenon
of digital work. While the implications of OLMs on worker well-being are hotly debated, little …

Is it ethical to use Mechanical Turk for behavioral research? Relevant data from a representative survey of MTurk participants and wages

AJ Moss, C Rosenzweig, J Robinson, SN Jaffe… - Behavior Research …, 2023 - Springer
To understand human behavior, social scientists need people and data. In the last decade,
Amazon's Mechanical Turk (MTurk) emerged as a flexible, affordable, and reliable source of …

Rehumanized crowdsourcing: A labeling framework addressing bias and ethics in machine learning

NM Barbosa, M Chen - Proceedings of the 2019 CHI Conference on …, 2019 - dl.acm.org
The increased use of machine learning in recent years has led to large volumes of data being
manually labeled via crowdsourcing microtasks completed by humans. This brought about …

ECCV Caption: Correcting false negatives by collecting machine-and-human-verified image-caption associations for MS-COCO

S Chun, W Kim, S Park, M Chang, SJ Oh - European Conference on …, 2022 - Springer
Image-Text matching (ITM) is a common task for evaluating the quality of Vision and
Language (VL) models. However, existing ITM benchmarks have a significant limitation …

Fair work: Crowd work minimum wage with one line of code

ME Whiting, G Hugh, MS Bernstein - … of the AAAI Conference on Human …, 2019 - ojs.aaai.org
Accurate task pricing in microtask marketplaces requires substantial effort via trial and error,
contributing to a pattern of worker underpayment. In response, we introduce Fair Work …

Who broke Amazon Mechanical Turk? An analysis of crowdsourcing data quality over time

CC Marshall, PSR Goguladinne… - Proceedings of the 15th …, 2023 - dl.acm.org
We present the results of a survey fielded in June of 2022 as a lens to examine recent data
reliability issues on Amazon Mechanical Turk. We contrast bad data from this survey with …