A survey of deep learning for mathematical reasoning
Mathematical reasoning is a fundamental aspect of human intelligence and is applicable in
various fields, including science, engineering, finance, and everyday life. The development …
A review of large language models and autonomous agents in chemistry
Large language models (LLMs) have emerged as powerful tools in chemistry, significantly
impacting molecule design, property prediction, and synthesis optimization. This review …
PaLM: Scaling language modeling with pathways
Large language models have been shown to achieve remarkable performance across a
variety of natural language tasks using few-shot learning, which drastically reduces the …
WizardMath: Empowering mathematical reasoning for large language models via reinforced Evol-Instruct
Large language models (LLMs), such as GPT-4, have shown remarkable performance in
natural language processing (NLP) tasks, including challenging mathematical reasoning …
Chain-of-thought prompting elicits reasoning in large language models
We explore how generating a chain of thought (a series of intermediate reasoning steps)
significantly improves the ability of large language models to perform complex reasoning. In …
Active prompting with chain-of-thought for large language models
The increasing scale of large language models (LLMs) brings emergent abilities to various
complex tasks requiring reasoning, such as arithmetic and commonsense reasoning. It is …
Automatic prompt augmentation and selection with chain-of-thought from labeled data
Chain-of-thought prompting (CoT) advances the reasoning abilities of large language
models (LLMs) and achieves superior performance in arithmetic, commonsense, and …
Solving math word problems via cooperative reasoning induced language models
Large-scale pre-trained language models (PLMs) bring new opportunities to challenging
problems, especially those that need high-level intelligence, such as the math word problem …
Learning to reason deductively: Math word problem solving as complex relation extraction
Solving math word problems requires deductive reasoning over the quantities in the text.
Various recent research efforts mostly relied on sequence-to-sequence or sequence-to-tree …
Let GPT be a math tutor: Teaching math word problem solvers with customized exercise generation
In this paper, we present a novel approach for distilling math word problem solving
capabilities from large language models (LLMs) into smaller, more efficient student models …