Going beyond xai: A systematic survey for explanation-guided learning
As the societal impact of Deep Neural Networks (DNNs) grows, the goals for advancing
DNNs become more complex and diverse, ranging from improving a conventional model …
Exaranker: Synthetic explanations improve neural rankers
Recent work has shown that incorporating explanations into the output generated by large
language models (LLMs) can significantly enhance performance on a broad spectrum of …
D-separation for causal self-explanation
Rationalization aims to strengthen the interpretability of NLP models by extracting a subset
of human-intelligible pieces of their input texts. Conventional works generally employ the …
Studying how to efficiently and effectively guide models with explanations
Despite being highly performant, deep neural networks might base their decisions on
features that spuriously correlate with the provided labels, thus hurting generalization. To …
The inside story: Towards better understanding of machine translation neural evaluation metrics
Neural metrics for machine translation evaluation, such as COMET, exhibit significant
improvements in their correlation with human judgments, as compared to traditional metrics …
Leveraging saliency priors and explanations for enhanced consistent interpretability
L Dong, L Chen, Z Fu, C Zheng, X Cui… - Expert Systems with …, 2024 - Elsevier
Deep neural networks have emerged as highly effective tools for computer vision systems,
showcasing remarkable performance. However, the intrinsic opacity, potential biases, and …
Exaranker: Explanation-augmented neural ranker
Recent work has shown that inducing a large language model (LLM) to generate
explanations prior to outputting an answer is an effective strategy to improve performance on …
Enhancing the rationale-input alignment for self-explaining rationalization
Rationalization empowers deep learning models with self-explaining capabilities through a
cooperative game, where a generator selects a semantically consistent subset of the input …
Induced natural language rationales and interleaved markup tokens enable extrapolation in large language models
The ability to extrapolate, i.e., to make predictions on sequences that are longer than those
presented as training examples, is a challenging problem for current deep learning models …
Proto-lm: A prototypical network-based framework for built-in interpretability in large language models
Large Language Models (LLMs) have significantly advanced the field of Natural Language
Processing (NLP), but their lack of interpretability has been a major concern. Current …