Deep learning for source code modeling and generation: Models, applications, and challenges
Deep Learning (DL) techniques for Natural Language Processing have been evolving
remarkably fast. Recently, the DL advances in language modeling, machine translation, and …
WizardMath: Empowering mathematical reasoning for large language models via reinforced evol-instruct
Large language models (LLMs), such as GPT-4, have shown remarkable performance in
natural language processing (NLP) tasks, including challenging mathematical reasoning …
VirtualHome: Simulating household activities via programs
In this paper, we are interested in modeling complex activities that occur in a typical
household. We propose to use programs, i.e., sequences of atomic actions and interactions …
Leveraging grammar and reinforcement learning for neural program synthesis
Program synthesis is the task of automatically generating a program consistent with a
specification. Recent years have seen the proposal of a number of neural approaches for …
Recent advances in leveraging human guidance for sequential decision-making tasks
A longstanding goal of artificial intelligence is to create artificial agents capable of learning
to perform tasks that require sequential decision making. Importantly, while it is the artificial …
Execution-guided neural program synthesis
Neural program synthesis from input-output examples has attracted an increasing interest
from both the machine learning and the programming language community. Most existing …
Neural program meta-induction
Most recently proposed methods for neural program induction work under the assumption of
having a large set of input/output (I/O) examples for learning any given input-output …
Recursion of thought: A divide-and-conquer approach to multi-context reasoning with language models
Generating intermediate steps, or Chain of Thought (CoT), is an effective way to significantly
improve language models' (LM) multi-step reasoning capability. However, the CoT lengths …
Complex program induction for querying knowledge bases in the absence of gold programs
Recent years have seen increasingly complex question-answering on knowledge bases
(KBQA) involving logical, quantitative, and comparative reasoning over KB subgraphs …
On-the-fly operation batching in dynamic computation graphs
Dynamic neural network toolkits such as PyTorch, DyNet, and Chainer offer more flexibility
for implementing models that cope with data of varying dimensions and structure, relative to …