A review of recent machine learning advances for forecasting harmful algal blooms and shellfish contamination

RC Cruz, P Reis Costa, S Vinga, L Krippahl… - Journal of Marine …, 2021 - mdpi.com
Harmful algal blooms (HABs) are among the most severe ecological marine problems
worldwide. Under favorable climate and oceanographic conditions, toxin-producing …

Interpreting deep learning models in natural language processing: A review

X Sun, D Yang, X Li, T Zhang, Y Meng, H Qiu… - arXiv preprint arXiv …, 2021 - arxiv.org
Neural network models have achieved state-of-the-art performances in a wide range of
natural language processing (NLP) tasks. However, a long-standing criticism against neural …

A survey of the state of explainable AI for natural language processing

M Danilevsky, K Qian, R Aharonov, Y Katsis… - arXiv preprint arXiv …, 2020 - arxiv.org
Recent years have seen important advances in the quality of state-of-the-art models, but this
has come at the expense of models becoming less interpretable. This survey presents an …

Towards faithful model explanation in NLP: A survey

Q Lyu, M Apidianaki, C Callison-Burch - Computational Linguistics, 2024 - direct.mit.edu
End-to-end neural Natural Language Processing (NLP) models are notoriously difficult to
understand. This has given rise to numerous efforts towards model explainability in recent …

On interpretability of artificial neural networks: A survey

FL Fan, J Xiong, M Li, G Wang - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Deep learning as performed by artificial deep neural networks (DNNs) has achieved great
successes recently in many important areas that deal with text, images, videos, graphs, and …

AllenNLP interpret: A framework for explaining predictions of NLP models

E Wallace, J Tuyls, J Wang, S Subramanian… - arXiv preprint arXiv …, 2019 - arxiv.org
Neural NLP models are increasingly accurate but are imperfect and opaque---they break in
counterintuitive ways and leave end users puzzled at their behavior. Model interpretation …

Trick me if you can: Human-in-the-loop generation of adversarial examples for question answering

E Wallace, P Rodriguez, S Feng, I Yamada… - Transactions of the …, 2019 - direct.mit.edu
Adversarial evaluation stress-tests a model's understanding of natural language. Because
past approaches expose superficial patterns, the resulting adversarial examples are limited …

How Case-Based Reasoning Explains Neural Networks: A Theoretical Analysis of XAI Using Post-Hoc Explanation-by-Example from a Survey of ANN-CBR Twin …

MT Keane, EM Kenny - … 2019, Otzenhausen, Germany, September 8–12 …, 2019 - Springer
This paper proposes a theoretical analysis of one approach to the eXplainable AI (XAI)
problem, using post-hoc explanation-by-example, that relies on the twinning of artificial …

Explainability does not mitigate the negative impact of incorrect AI advice in a personnel selection task

J Cecil, E Lermer, MFC Hudecek, J Sauer, S Gaube - Scientific reports, 2024 - nature.com
Despite the rise of decision support systems enabled by artificial intelligence (AI) in
personnel selection, their impact on decision-making processes is largely unknown …

Analyzing and interpreting neural networks for NLP: A report on the first BlackboxNLP workshop

A Alishahi, G Chrupała, T Linzen - Natural Language Engineering, 2019 - cambridge.org
The Empirical Methods in Natural Language Processing (EMNLP) 2018 workshop
BlackboxNLP was dedicated to resources and techniques specifically developed for …