Going beyond XAI: A systematic survey for explanation-guided learning

Y Gao, S Gu, J Jiang, SR Hong, D Yu, L Zhao - ACM Computing Surveys, 2024 - dl.acm.org
As the societal impact of Deep Neural Networks (DNNs) grows, the goals for advancing
DNNs become more complex and diverse, ranging from improving a conventional model …

ExaRanker: Synthetic explanations improve neural rankers

F Ferraretto, T Laitz, R Lotufo, R Nogueira - Proceedings of the 46th …, 2023 - dl.acm.org
Recent work has shown that incorporating explanations into the output generated by large
language models (LLMs) can significantly enhance performance on a broad spectrum of …

D-separation for causal self-explanation

W Liu, J Wang, H Wang, R Li, Z Deng… - Advances in Neural …, 2023 - proceedings.neurips.cc
Rationalization aims to strengthen the interpretability of NLP models by extracting a subset
of human-intelligible pieces of their input texts. Conventional works generally employ the …

Studying how to efficiently and effectively guide models with explanations

S Rao, M Böhle, A Parchami-Araghi… - Proceedings of the …, 2023 - openaccess.thecvf.com
Despite being highly performant, deep neural networks might base their decisions on
features that spuriously correlate with the provided labels, thus hurting generalization. To …

The inside story: Towards better understanding of machine translation neural evaluation metrics

R Rei, NM Guerreiro, M Treviso, L Coheur… - arXiv preprint arXiv …, 2023 - arxiv.org
Neural metrics for machine translation evaluation, such as COMET, exhibit significant
improvements in their correlation with human judgments, as compared to traditional metrics …

Leveraging saliency priors and explanations for enhanced consistent interpretability

L Dong, L Chen, Z Fu, C Zheng, X Cui… - Expert Systems with …, 2024 - Elsevier
Deep neural networks have emerged as highly effective tools for computer vision systems,
showcasing remarkable performance. However, the intrinsic opacity, potential biases, and …

ExaRanker: Explanation-augmented neural ranker

F Ferraretto, T Laitz, R Lotufo, R Nogueira - arXiv preprint arXiv …, 2023 - arxiv.org
Recent work has shown that inducing a large language model (LLM) to generate
explanations prior to outputting an answer is an effective strategy to improve performance on …

Enhancing the rationale-input alignment for self-explaining rationalization

W Liu, H Wang, J Wang, Z Deng… - 2024 IEEE 40th …, 2024 - ieeexplore.ieee.org
Rationalization empowers deep learning models with self-explaining capabilities through a
cooperative game, where a generator selects a semantically consistent subset of the input …

Induced natural language rationales and interleaved markup tokens enable extrapolation in large language models

M Bueno, C Gemmell, J Dalton, R Lotufo… - arXiv preprint arXiv …, 2022 - arxiv.org
The ability to extrapolate, i.e., to make predictions on sequences that are longer than those
presented as training examples, is a challenging problem for current deep learning models …

Proto-lm: A prototypical network-based framework for built-in interpretability in large language models

S **e, S Vosoughi, S Hassanpour - arxiv preprint arxiv:2311.01732, 2023 - arxiv.org
Large Language Models (LLMs) have significantly advanced the field of Natural Language
Processing (NLP), but their lack of interpretability has been a major concern. Current …