On the explainability of natural language processing deep models
Despite their success, deep networks are used as black-box models with outputs that are not
easily explainable during the learning and the prediction phases. This lack of interpretability …
Rationalization for explainable NLP: a survey
Recent advances in deep learning have improved the performance of many Natural
Language Processing (NLP) tasks such as translation, question-answering, and text …
Towards out-of-distribution generalization: A survey
Traditional machine learning paradigms are based on the assumption that both training and
test data follow the same statistical pattern, which is mathematically referred to as …
Interpretable and generalizable graph learning via stochastic attention mechanism
Interpretable graph learning is in need as many scientific applications depend on learning
models to collect insights from graph-structured data. Previous works mostly focused on …
Causality inspired representation learning for domain generalization
Abstract Domain generalization (DG) is essentially an out-of-distribution problem, aiming to
generalize the knowledge learned from multiple source domains to an unseen target …
Let invariant rationale discovery inspire graph contrastive learning
Leading graph contrastive learning (GCL) methods perform graph augmentations in two
fashions: (1) randomly corrupting the anchor graph, which could cause the loss of semantic …
Discovering invariant rationales for graph neural networks
Intrinsic interpretability of graph neural networks (GNNs) is to find a small subset of the input
graph's features--rationale--which guides the model prediction. Unfortunately, the leading …
Learning invariant graph representations for out-of-distribution generalization
Graph representation learning has shown effectiveness when testing and training graph
data come from the same distribution, but most existing approaches fail to generalize under …
Fishr: Invariant gradient variances for out-of-distribution generalization
Learning robust models that generalize well under changes in the data distribution is critical
for real-world applications. To this end, there has been a growing surge of interest to learn …
Improving out-of-distribution robustness via selective augmentation
Abstract Machine learning algorithms typically assume that training and test examples are
drawn from the same distribution. However, distribution shift is a common problem in real …