Personalized explanation in machine learning: A conceptualization

J Schneider, J Handali - arXiv preprint arXiv:1901.00770, 2019 - arxiv.org
Explanation in machine learning and related fields such as artificial intelligence aims at
making machine learning models and their decisions understandable to humans. Existing …

All those wasted hours: On task abandonment in crowdsourcing

L Han, K Roitero, U Gadiraju, C Sarasua… - Proceedings of the …, 2019 - dl.acm.org
Crowdsourcing has become a standard methodology to collect manually annotated data
such as relevance judgments at scale. On crowdsourcing platforms like Amazon MTurk or …

On the state of reporting in crowdsourcing experiments and a checklist to aid current practices

J Ramírez, B Sayin, M Baez, F Casati… - Proceedings of the …, 2021 - dl.acm.org
Crowdsourcing is being increasingly adopted as a platform to run studies with human
subjects. Running a crowdsourcing experiment involves several choices and strategies to …

Perspectives on large language models for relevance judgment

G Faggioli, L Dietz, CLA Clarke, G Demartini… - Proceedings of the …, 2023 - dl.acm.org
When asked, large language models (LLMs) like ChatGPT claim that they can assist with
relevance judgments, but it is not clear whether automated judgments can reliably be used in …

The impact of task abandonment in crowdsourcing

L Han, K Roitero, U Gadiraju, C Sarasua… - … on Knowledge and …, 2019 - ieeexplore.ieee.org
Crowdsourcing has become a standard methodology to collect manually annotated data
such as relevance judgments at scale. On crowdsourcing platforms like Amazon MTurk or …

Crowd worker strategies in relevance judgment tasks

L Han, E Maddalena, A Checco, C Sarasua… - Proceedings of the 13th …, 2020 - dl.acm.org
Crowdsourcing is a popular technique to collect large amounts of human-generated labels,
such as relevance judgments used to create information retrieval (IR) evaluation collections …

On the role of human and machine metadata in relevance judgment tasks

J Xu, L Han, S Sadiq, G Demartini - Information Processing & Management, 2023 - Elsevier
In order to evaluate the effectiveness of Information Retrieval (IR) systems it is key to collect
relevance judgments from human assessors. Crowdsourcing has successfully been used as …

Adaptation in information search and decision-making under time constraints

A Crescenzi, R Capra, B Choi, Y Li - … of the 2021 conference on human …, 2021 - dl.acm.org
Prior work in IR has found that searchers under time constraints may adapt their search
processes and perceive their task or their performance differently. In many of these prior …

On fine-grained relevance scales

K Roitero, E Maddalena, G Demartini… - The 41st International …, 2018 - dl.acm.org
In Information Retrieval evaluation, the classical approach of adopting binary relevance
judgments has been replaced by multi-level relevance judgments and by gain-based …

A test collection for evaluating legal case law search

D Locke, G Zuccon - The 41st International ACM SIGIR Conference on …, 2018 - dl.acm.org
Test collection based evaluation represents the standard of evaluation for information
retrieval systems. Legal IR, more specifically case law retrieval, has no such standard test …