Teach me to explain: A review of datasets for explainable natural language processing
S Wiegreffe, A Marasović - arXiv preprint, 2021
BLIP: Bootstrapping language-image pre-training for unified vision-language understanding and generation
Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either …
Large language models are visual reasoning coordinators
Visual reasoning requires multimodal perception and commonsense cognition of the world. Recently, multiple vision-language models (VLMs) have been proposed with excellent …
Align before fuse: Vision and language representation learning with momentum distillation
Large-scale vision and language representation learning has shown promising improvements on various vision-language tasks. Most existing methods employ a …
Sentence representation method based on multi-layer semantic network
With the development of artificial intelligence, there is growing interest in enabling computers to understand human language through natural language technology and to learn to think like …
Measuring association between labels and free-text rationales
In interpretable NLP, we require faithful rationales that reflect the model's decision-making process for an explained instance. While prior work focuses on extractive rationales (a …
Probing inter-modality: Visual parsing with self-attention for vision-and-language pre-training
Vision-Language Pre-training (VLP) aims to learn multi-modal representations from image-text pairs and serves downstream vision-language tasks in a fine-tuning fashion …
NLX-GPT: A model for natural language explanations in vision and vision-language tasks
F Sammani, T Mukherjee… - Proceedings of the …, 2022 - openaccess.thecvf.com
Natural language explanation (NLE) models aim to explain the decision-making process of a black-box system by generating natural language sentences that are human-friendly …
Rigorously assessing natural language explanations of neurons
Natural language is an appealing medium for explaining how large language models process and store information, but evaluating the faithfulness of such explanations is …
Natural language rationales with full-stack visual reasoning: From pixels to semantic frames to commonsense graphs
Natural language rationales could provide intuitive, higher-level explanations that are easily understandable by humans, complementing the more broadly studied lower-level …