SCOTT: Self-consistent chain-of-thought distillation
Large language models (LMs), beyond a certain scale, demonstrate the emergent capability
of generating free-text rationales for their predictions via chain-of-thought (CoT) prompting …
Faithfulness tests for natural language explanations
Explanations of neural models aim to reveal a model's decision-making process for its
predictions. However, recent work shows that current methods for generating explanations, such as …
PINTO: Faithful language reasoning using prompt-generated rationales
Neural language models (LMs) have achieved impressive results on various language-
based reasoning tasks by utilizing latent knowledge encoded in their own pretrained …
HARE: Explainable hate speech detection with step-by-step reasoning
With the proliferation of social media, accurate detection of hate speech has become critical
to ensure safety online. To combat nuanced forms of hate speech, it is important to identify …
Tailoring self-rationalizers with multi-reward distillation
Large language models (LMs) are capable of generating free-text rationales to aid question
answering. However, prior work 1) suggests that useful self-rationalization is emergent only …
Beyond labels: Empowering human annotators with natural language explanations through a novel active-learning architecture
Real-world domain experts (e.g., doctors) rarely annotate only a decision label in their day-to-
day workflow without providing explanations. Yet, existing low-resource learning techniques …
Explanation-aware soft ensemble empowers large language model in-context learning
Large language models (LLMs) have shown remarkable capabilities in various natural
language understanding tasks. With only a few demonstration examples, these LLMs can …
Are human explanations always helpful? Towards objective evaluation of human natural language explanations
Human-annotated labels and explanations are critical for training explainable NLP models.
However, unlike human-annotated labels, whose quality is easier to calibrate (e.g., with a …
REV: Information-theoretic evaluation of free-text rationales
Generating free-text rationales is a promising step towards explainable NLP, yet evaluating
such rationales remains a challenge. Existing metrics have mostly focused on measuring the …
Beyond labels: Empowering human annotators with natural language explanations through a novel active-learning architecture
Data annotation is a costly task; thus, researchers have proposed low-resource learning
techniques like Active Learning (AL) to support human annotators. Yet, existing AL works …