Towards bidirectional human-AI alignment: A systematic review for clarifications, framework, and future directions
Recent advancements in general-purpose AI have highlighted the importance of guiding AI
systems towards the intended goals, ethical principles, and values of individuals and …
A critical survey on fairness benefits of explainable AI
In this critical survey, we analyze typical claims on the relationship between explainable AI
(XAI) and fairness to disentangle the multidimensional relationship between these two …
"I'm Not Sure, But...": Examining the impact of large language models' uncertainty expression on user reliance and trust
Widely deployed large language models (LLMs) can produce convincing yet incorrect
outputs, potentially misleading users who may rely on them as if they were correct. To …
Enhancing AI-Assisted Group Decision Making through LLM-Powered Devil's Advocate
Group decision making plays a crucial role in our complex and interconnected world. The
rise of AI technologies has the potential to provide data-driven insights to facilitate group …
In search of verifiability: Explanations rarely enable complementary performance in AI‐advised decision making
The current literature on AI‐advised decision making—involving explainable AI systems
advising human decision makers—presents a series of inconclusive and confounding …
HILL: A hallucination identifier for large language models
Large language models (LLMs) are prone to hallucinations, i.e., nonsensical, unfaithful, and
undesirable text. Users tend to overrely on LLMs and corresponding hallucinations which …
“Are you really sure?” Understanding the effects of human self-confidence calibration in AI-assisted decision making
In AI-assisted decision-making, it is crucial but challenging for humans to achieve
appropriate reliance on AI. This paper approaches this problem from a human-centered …
Leveraging ChatGPT for automated human-centered explanations in recommender systems
The adoption of recommender systems (RSs) in various domains has become increasingly
popular, but concerns have been raised about their lack of transparency and interpretability …
The impact of imperfect XAI on human-AI decision-making
Explainability techniques are rapidly being developed to improve human-AI decision-
making across various cooperative work settings. Consequently, previous research has …
Does more advice help? The effects of second opinions in AI-assisted decision making
AI assistance in decision-making has become popular, yet people's inappropriate reliance
on AI often leads to unsatisfactory human-AI collaboration performance. In this paper …