Personalized explanation in machine learning: A conceptualization
J Schneider, J Handali - arxiv preprint arxiv:1901.00770, 2019 - arxiv.org
Explanation in machine learning and related fields such as artificial intelligence aims at
making machine learning models and their decisions understandable to humans. Existing …
All those wasted hours: On task abandonment in crowdsourcing
Crowdsourcing has become a standard methodology to collect manually annotated data
such as relevance judgments at scale. On crowdsourcing platforms like Amazon MTurk or …
On the state of reporting in crowdsourcing experiments and a checklist to aid current practices
Crowdsourcing is being increasingly adopted as a platform to run studies with human
subjects. Running a crowdsourcing experiment involves several choices and strategies to …
Perspectives on large language models for relevance judgment
When asked, large language models (LLMs) like ChatGPT claim that they can assist with
relevance judgments but it is not clear whether automated judgments can reliably be used in …
The impact of task abandonment in crowdsourcing
Crowdsourcing has become a standard methodology to collect manually annotated data
such as relevance judgments at scale. On crowdsourcing platforms like Amazon MTurk or …
Crowd worker strategies in relevance judgment tasks
Crowdsourcing is a popular technique to collect large amounts of human-generated labels,
such as relevance judgments used to create information retrieval (IR) evaluation collections …
On the role of human and machine metadata in relevance judgment tasks
In order to evaluate the effectiveness of Information Retrieval (IR) systems it is key to collect
relevance judgments from human assessors. Crowdsourcing has successfully been used as …
Adaptation in information search and decision-making under time constraints
Prior work in IR has found that searchers under time constraints may adapt their search
processes and perceive their task or their performance differently. In many of these prior …
On fine-grained relevance scales
In Information Retrieval evaluation, the classical approach of adopting binary relevance
judgments has been replaced by multi-level relevance judgments and by gain-based …
A test collection for evaluating legal case law search
Test collection based evaluation represents the standard of evaluation for information
retrieval systems. Legal IR, more specifically case law retrieval, has no such standard test …