Interpreting deep learning models in natural language processing: A review

X Sun, D Yang, X Li, T Zhang, Y Meng, H Qiu… - arXiv preprint arXiv …, 2021 - arxiv.org
Neural network models have achieved state-of-the-art performances in a wide range of
natural language processing (NLP) tasks. However, a long-standing criticism against neural …

Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification

H Li, C Zhu, Y Zhang, Y Sun, Z Shui… - Proceedings of the …, 2023 - openaccess.thecvf.com
While Multiple Instance Learning (MIL) has shown promising results in digital
Pathology Whole Slide Image (WSI) analysis, such a paradigm still faces performance and …

A review on fact extraction and verification

G Bekoulis, C Papagiannopoulou… - ACM Computing Surveys …, 2021 - dl.acm.org
We study the fact-checking problem, which aims to identify the veracity of a given claim.
Specifically, we focus on the task of Fact Extraction and VERification (FEVER) and its …

Few-shot self-rationalization with natural language prompts

A Marasović, I Beltagy, D Downey… - arXiv preprint arXiv …, 2021 - arxiv.org
Self-rationalization models that predict task labels and generate free-text elaborations for
their predictions could enable more intuitive interaction with NLP systems. These models …

Measuring association between labels and free-text rationales

S Wiegreffe, A Marasović, NA Smith - arXiv preprint arXiv:2010.12762, 2020 - arxiv.org
In interpretable NLP, we require faithful rationales that reflect the model's decision-making
process for an explained instance. While prior work focuses on extractive rationales (a …

A novel approach for effective multi-view clustering with information-theoretic perspective

C Cui, Y Ren, J Pu, J Li, X Pu, T Wu… - Advances in Neural …, 2024 - proceedings.neurips.cc
Multi-view clustering (MVC) is a popular technique for improving clustering performance
using various data sources. However, existing methods primarily focus on acquiring …

The out-of-distribution problem in explainability and search methods for feature importance explanations

P Hase, H **e, M Bansal - Advances in neural information …, 2021 - proceedings.neurips.cc
Feature importance (FI) estimates are a popular form of explanation, and they are commonly
created and evaluated by computing the change in model confidence caused by removing …

Can rationalization improve robustness?

H Chen, J He, K Narasimhan, D Chen - arXiv preprint arXiv:2204.11790, 2022 - arxiv.org
A growing line of work has investigated the development of neural NLP models that can
produce rationales--subsets of input that can explain their model predictions. In this paper …

Prompting contrastive explanations for commonsense reasoning tasks

B Paranjape, J Michael, M Ghazvininejad… - arXiv preprint arXiv …, 2021 - arxiv.org
Many commonsense reasoning NLP tasks involve choosing between one or more possible
answers to a question or prompt based on knowledge that is often implicit. Large pretrained …

Explainable legal case matching via inverse optimal transport-based rationale extraction

W Yu, Z Sun, J Xu, Z Dong, X Chen, H Xu… - Proceedings of the 45th …, 2022 - dl.acm.org
As an essential operation of legal retrieval, legal case matching plays a central role in
intelligent legal systems. This task has a high demand on the explainability of matching …