Leveraging explanations in interactive machine learning: An overview
Explanations have gained an increasing level of interest in the AI and Machine Learning
(ML) communities in order to improve model transparency and allow users to form a mental …
Distilling step-by-step! Outperforming larger language models with less training data and smaller model sizes
Deploying large language models (LLMs) is challenging because they are memory
inefficient and compute-intensive for practical applications. In reaction, researchers train …
Challenging big-bench tasks and whether chain-of-thought can solve them
BIG-Bench (Srivastava et al., 2022) is a diverse evaluation suite that focuses on tasks
believed to be beyond the capabilities of current language models. Language models have …
Chain-of-thought prompting elicits reasoning in large language models
We explore how generating a chain of thought---a series of intermediate reasoning steps---
significantly improves the ability of large language models to perform complex reasoning. In …
Can language models learn from explanations in context?
Language Models (LMs) can perform new tasks by adapting to a few in-context examples.
For humans, explanations that connect examples to task principles can improve learning …
Cross-task generalization via natural language crowdsourcing instructions
Humans (e.g., crowdworkers) have a remarkable ability in solving different tasks, by simply
reading textual instructions that define them and looking at a few examples. Despite the …
Cumulative reasoning with large language models
While language models are powerful and versatile, they often fail to address highly complex
problems. This is because solving complex problems requires deliberate thinking, which has …
Symbolic chain-of-thought distillation: Small models can also "think" step-by-step
Chain-of-thought prompting (e.g., "Let's think step-by-step") primes large language models to
verbalize rationalization for their predictions. While chain-of-thought can lead to dramatic …
FLEX: Unifying evaluation for few-shot NLP
Few-shot NLP research is highly active, yet conducted in disjoint research threads with
evaluation suites that lack challenging-yet-realistic testing setups and fail to employ careful …
Local interpretations for explainable natural language processing: A survey
As the use of deep learning techniques has grown across various fields over the past
decade, complaints about the opaqueness of the black-box models have increased …