Benchmarks for automated commonsense reasoning: A survey
E Davis - ACM Computing Surveys, 2023 - dl.acm.org
More than one hundred benchmarks have been developed to test the commonsense
knowledge and commonsense reasoning abilities of artificial intelligence (AI) systems …
Going beyond xai: A systematic survey for explanation-guided learning
As the societal impact of Deep Neural Networks (DNNs) grows, the goals for advancing
DNNs become more complex and diverse, ranging from improving a conventional model …
Multimodal learning with transformers: A survey
Transformer is a promising neural network learner, and has achieved great success in
various machine learning tasks. Thanks to the recent prevalence of multimodal applications …
Large language models are visual reasoning coordinators
Visual reasoning requires multimodal perception and commonsense cognition of the world.
Recently, multiple vision-language models (VLMs) have been proposed with excellent …
Language models are general-purpose interfaces
Foundation models have received much attention due to their effectiveness across a broad
range of downstream applications. Though there is a big convergence in terms of …
Symbolic chain-of-thought distillation: Small models can also "think" step-by-step
Chain-of-thought prompting (e.g., "Let's think step-by-step") primes large language models to
verbalize rationalization for their predictions. While chain-of-thought can lead to dramatic …
Reframing human-AI collaboration for generating free-text explanations
Large language models are increasingly capable of generating fluent-appearing text with
relatively little task-specific supervision. But can these models accurately explain …
Explanations from large language models make small reasoners better
Integrating free-text explanations to in-context learning of large language models (LLM) is
shown to elicit strong reasoning capabilities along with reasonable explanations. In this …
M3IT: A Large-Scale Dataset towards Multi-Modal Multilingual Instruction Tuning
Instruction tuning has significantly advanced large language models (LLMs) such as
ChatGPT, enabling them to align with human instructions across diverse tasks. However …
A survey of multimodal large language model from a data-centric perspective
Multimodal large language models (MLLMs) enhance the capabilities of standard large
language models by integrating and processing data from multiple modalities, including text …