On the explainability of natural language processing deep models

JE Zini, M Awad - ACM Computing Surveys, 2022 - dl.acm.org
Despite their success, deep networks are used as black-box models with outputs that are not
easily explainable during the learning and the prediction phases. This lack of interpretability …

Rationalization for explainable NLP: a survey

S Gurrapu, A Kulkarni, L Huang… - Frontiers in Artificial …, 2023 - frontiersin.org
Recent advances in deep learning have improved the performance of many Natural
Language Processing (NLP) tasks such as translation, question-answering, and text …

Towards out-of-distribution generalization: A survey

J Liu, Z Shen, Y He, X Zhang, R Xu, H Yu… - arXiv preprint arXiv …, 2021 - arxiv.org
Traditional machine learning paradigms are based on the assumption that both training and
test data follow the same statistical pattern, which is mathematically referred to as …

Interpretable and generalizable graph learning via stochastic attention mechanism

S Miao, M Liu, P Li - International Conference on Machine …, 2022 - proceedings.mlr.press
Interpretable graph learning is in need as many scientific applications depend on learning
models to collect insights from graph-structured data. Previous works mostly focused on …

Causality inspired representation learning for domain generalization

F Lv, J Liang, S Li, B Zang, CH Liu… - Proceedings of the …, 2022 - openaccess.thecvf.com
Domain generalization (DG) is essentially an out-of-distribution problem, aiming to
generalize the knowledge learned from multiple source domains to an unseen target …

Let invariant rationale discovery inspire graph contrastive learning

S Li, X Wang, A Zhang, Y Wu, X He… - … on machine learning, 2022 - proceedings.mlr.press
Leading graph contrastive learning (GCL) methods perform graph augmentations in two
fashions: (1) randomly corrupting the anchor graph, which could cause the loss of semantic …

Discovering invariant rationales for graph neural networks

YX Wu, X Wang, A Zhang, X He, TS Chua - arXiv preprint arXiv …, 2022 - arxiv.org
Intrinsic interpretability of graph neural networks (GNNs) is to find a small subset of the input
graph's features--rationale--which guides the model prediction. Unfortunately, the leading …

Learning invariant graph representations for out-of-distribution generalization

H Li, Z Zhang, X Wang, W Zhu - Advances in Neural …, 2022 - proceedings.neurips.cc
Graph representation learning has shown effectiveness when testing and training graph
data come from the same distribution, but most existing approaches fail to generalize under …

Fishr: Invariant gradient variances for out-of-distribution generalization

A Rame, C Dancette, M Cord - International Conference on …, 2022 - proceedings.mlr.press
Learning robust models that generalize well under changes in the data distribution is critical
for real-world applications. To this end, there has been a growing surge of interest to learn …

Improving out-of-distribution robustness via selective augmentation

H Yao, Y Wang, S Li, L Zhang… - International …, 2022 - proceedings.mlr.press
Machine learning algorithms typically assume that training and test examples are
drawn from the same distribution. However, distribution shift is a common problem in real …