SCOTT: Self-consistent chain-of-thought distillation

P Wang, Z Wang, Z Li, Y Gao, B Yin, X Ren - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LMs) beyond a certain scale demonstrate the emergent capability
of generating free-text rationales for their predictions via chain-of-thought (CoT) prompting …

Faithfulness tests for natural language explanations

P Atanasova, OM Camburu, C Lioma… - arXiv preprint arXiv …, 2023 - arxiv.org
Explanations of neural models aim to reveal a model's decision-making process for its
predictions. However, recent work shows that current methods giving explanations such as …

PINTO: Faithful language reasoning using prompt-generated rationales

P Wang, A Chan, F Ilievski, M Chen, X Ren - arXiv preprint arXiv …, 2022 - arxiv.org
Neural language models (LMs) have achieved impressive results on various language-
based reasoning tasks by utilizing latent knowledge encoded in their own pretrained …

HARE: Explainable hate speech detection with step-by-step reasoning

Y Yang, J Kim, Y Kim, N Ho, J Thorne, S Yun - arXiv preprint arXiv …, 2023 - arxiv.org
With the proliferation of social media, accurate detection of hate speech has become critical
to ensure safety online. To combat nuanced forms of hate speech, it is important to identify …

Tailoring self-rationalizers with multi-reward distillation

S Ramnath, B Joshi, S Hallinan, X Lu, LH Li… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LMs) are capable of generating free-text rationales to aid question
answering. However, prior work 1) suggests that useful self-rationalization is emergent only …

Beyond labels: Empowering human annotators with natural language explanations through a novel active-learning architecture

B Yao, I Jindal, L Popa, Y Katsis, S Ghosh, L He… - arXiv preprint arXiv …, 2023 - arxiv.org
Real-world domain experts (e.g., doctors) rarely annotate only a decision label in their day-to-
day workflow without providing explanations. Yet, existing low-resource learning techniques …

Explanation-aware soft ensemble empowers large language model in-context learning

Y Yu, J Shen, T Liu, Z Qin, JN Yan, J Liu… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) have shown remarkable capabilities in various natural
language understanding tasks. With only a few demonstration examples, these LLMs can …

Are human explanations always helpful? Towards objective evaluation of human natural language explanations

B Yao, P Sen, L Popa, J Hendler, D Wang - arXiv preprint arXiv …, 2023 - arxiv.org
Human-annotated labels and explanations are critical for training explainable NLP models.
However, unlike human-annotated labels whose quality is easier to calibrate (e.g., with a …

REV: Information-theoretic evaluation of free-text rationales

H Chen, F Brahman, X Ren, Y Ji, Y Choi… - arXiv preprint arXiv …, 2022 - arxiv.org
Generating free-text rationales is a promising step towards explainable NLP, yet evaluating
such rationales remains a challenge. Existing metrics have mostly focused on measuring the …

Beyond labels: Empowering human annotators with natural language explanations through a novel active-learning architecture

B Yao, I Jindal, L Popa, Y Katsis, S Ghosh, L He, Y Lu… - 2023 - dspace.rpi.edu
Data annotation is a costly task; thus, researchers have proposed low-resource learning
techniques like Active-Learning (AL) to support human annotators. Yet, existing AL works …