Foundations & trends in multimodal machine learning: Principles, challenges, and open questions
Multimodal machine learning is a vibrant multi-disciplinary research field that aims to design
computer agents with intelligent capabilities such as understanding, reasoning, and learning …
Algorithms to estimate Shapley value feature attributions
Feature attributions based on the Shapley value are popular for explaining machine
learning models. However, their estimation is complex from both theoretical and …
Language in a bottle: Language model guided concept bottlenecks for interpretable image classification
Abstract Concept Bottleneck Models (CBM) are inherently interpretable models that factor
model decisions into human-readable concepts. They allow people to easily understand …
Transparency of deep neural networks for medical image analysis: A review of interpretability methods
Artificial Intelligence (AI) has emerged as a useful aid in numerous clinical applications for
diagnosis and treatment decisions. Deep neural networks have shown the same or better …
Interpretable and explainable machine learning: A methods‐centric overview with concrete examples
Interpretability and explainability are crucial for machine learning (ML) and statistical
applications in medicine, economics, law, and natural sciences and form an essential …
Concept embedding models: Beyond the accuracy-explainability trade-off
Deploying AI-powered systems requires trustworthy models supporting effective human
interactions, going beyond raw prediction accuracy. Concept bottleneck models promote …
Toward transparent ai: A survey on interpreting the inner structures of deep neural networks
The last decade of machine learning has seen drastic increases in scale and capabilities.
Deep neural networks (DNNs) are increasingly being deployed in the real world. However …
Interpretable machine learning: Fundamental principles and 10 grand challenges
Interpretability in machine learning (ML) is crucial for high stakes decisions and
troubleshooting. In this work, we provide fundamental principles for interpretable ML, and …
From attribution maps to human-understandable explanations through concept relevance propagation
The field of explainable artificial intelligence (XAI) aims to bring transparency to today's
powerful but opaque deep learning models. While local XAI methods explain individual …
"Help me help the AI": Understanding how explainability can support human-AI interaction
Despite the proliferation of explainable AI (XAI) methods, little is understood about end-
users' explainability needs and behaviors around XAI explanations. To address this gap and …