On the explainability of natural language processing deep models

J El Zini, M Awad - ACM Computing Surveys, 2022 - dl.acm.org
Despite their success, deep networks are used as black-box models with outputs that are not
easily explainable during the learning and the prediction phases. This lack of interpretability …

Neural natural language processing for unstructured data in electronic health records: a review

I Li, J Pan, J Goldwasser, N Verma, WP Wong… - Computer Science …, 2022 - Elsevier
Electronic health records (EHRs), digital collections of patient healthcare events and
observations, are ubiquitous in medicine and critical to healthcare delivery, operations, and …

Trustworthy AI: A computational perspective

H Liu, Y Wang, W Fan, X Liu, Y Li, S Jain, Y Liu… - ACM Transactions on …, 2022 - dl.acm.org
In the past few decades, artificial intelligence (AI) technology has experienced swift
developments, changing everyone's daily life and profoundly altering the course of human …

A survey of the state of explainable AI for natural language processing

M Danilevsky, K Qian, R Aharonov, Y Katsis… - arXiv preprint arXiv …, 2020 - arxiv.org
Recent years have seen important advances in the quality of state-of-the-art models, but this
has come at the expense of models becoming less interpretable. This survey presents an …

Towards faithful model explanation in NLP: A survey

Q Lyu, M Apidianaki, C Callison-Burch - Computational Linguistics, 2024 - direct.mit.edu
End-to-end neural Natural Language Processing (NLP) models are notoriously difficult to
understand. This has given rise to numerous efforts towards model explainability in recent …

Attention is not not explanation

S Wiegreffe, Y Pinter - arXiv preprint arXiv:1908.04626, 2019 - arxiv.org
Attention mechanisms play a central role in NLP systems, especially within recurrent neural
network (RNN) models. Recently, there has been increasing interest in whether or not the …

Attention is not explanation

S Jain, BC Wallace - arXiv preprint arXiv:1902.10186, 2019 - arxiv.org
Attention mechanisms have seen wide adoption in neural NLP models. In addition to
improving predictive performance, these are often touted as affording transparency: models …

Is attention explanation? An introduction to the debate

A Bibal, R Cardon, D Alfter, R Wilkens… - Proceedings of the …, 2022 - aclanthology.org
The performance of deep learning models in NLP and other fields of machine learning has
led to a rise in their popularity, and so the need for explanations of these models becomes …

PadChest: A large chest x-ray image dataset with multi-label annotated reports

A Bustos, A Pertusa, JM Salinas… - Medical image …, 2020 - Elsevier
We present a labeled large-scale, high-resolution chest x-ray dataset for the automated
exploration of medical images along with their associated reports. This dataset includes …

Semantic probabilistic layers for neuro-symbolic learning

K Ahmed, S Teso, KW Chang… - Advances in …, 2022 - proceedings.neurips.cc
We design a predictive layer for structured-output prediction (SOP) that can be plugged into
any neural network guaranteeing its predictions are consistent with a set of predefined …