Large language models are zero-shot fuzzers: Fuzzing deep-learning libraries via large language models

Y Deng, CS Xia, H Peng, C Yang, L Zhang - Proceedings of the 32nd …, 2023 - dl.acm.org
Deep Learning (DL) systems have received exponential growth in popularity and have
become ubiquitous in our everyday life. Such systems are built on top of popular DL …

Program synthesis with large language models

J Austin, A Odena, M Nye, M Bosma… - arXiv preprint arXiv …, 2021 - arxiv.org
This paper explores the limits of the current generation of large language models for
program synthesis in general purpose programming languages. We evaluate a collection of …

Hypothesis search: Inductive reasoning with language models

R Wang, E Zelikman, G Poesia, Y Pu, N Haber… - arXiv preprint arXiv …, 2023 - arxiv.org
Inductive reasoning is a core problem-solving capacity: humans can identify underlying
principles from a few examples, which robustly generalize to novel scenarios. Recent work …

Parsel🐍: Algorithmic Reasoning with Language Models by Composing Decompositions

E Zelikman, Q Huang, G Poesia… - Advances in …, 2023 - proceedings.neurips.cc
Despite recent success in large language model (LLM) reasoning, LLMs struggle with
hierarchical multi-step reasoning tasks like generating complex programs. For these tasks …

Lego: Latent execution-guided reasoning for multi-hop question answering on knowledge graphs

H Ren, H Dai, B Dai, X Chen… - International …, 2021 - proceedings.mlr.press
Answering complex natural language questions on knowledge graphs (KGQA) is a
challenging task. It requires reasoning with the input natural language questions as well as …

SmBoP: Semi-autoregressive bottom-up semantic parsing

O Rubin, J Berant - arXiv preprint arXiv:2010.12412, 2020 - arxiv.org
The de-facto standard decoding method for semantic parsing in recent years has been to
autoregressively decode the abstract syntax tree of the target program using a top-down …

Symbolic metaprogram search improves learning efficiency and explains rule learning in humans

JS Rule, ST Piantadosi, A Cropper, K Ellis… - Nature …, 2024 - nature.com
Throughout their lives, humans seem to learn a variety of rules for things like applying
category labels, following procedures, and explaining causal relationships. These rules are …

HySynth: Context-free LLM approximation for guiding program synthesis

S Barke, E Anaya Gonzalez… - Advances in …, 2025 - proceedings.neurips.cc
Many structured prediction and reasoning tasks can be framed as program synthesis
problems, where the goal is to generate a program in a domain-specific …

Reasoning like program executors

X Pi, Q Liu, B Chen, M Ziyadi, Z Lin, Q Fu, Y Gao… - arXiv preprint arXiv …, 2022 - arxiv.org
Reasoning over natural language is a long-standing goal for the research community.
However, studies have shown that existing language models are inadequate in reasoning …

Programmatic reinforcement learning without oracles

W Qiu, H Zhu - The Tenth International Conference on Learning …, 2022 - par.nsf.gov
Deep reinforcement learning (RL) has led to encouraging successes in many challenging
control tasks. However, a deep RL model lacks interpretability due to the difficulty of …