The child as hacker

JS Rule, JB Tenenbaum, ST Piantadosi - Trends in cognitive sciences, 2020 - cell.com
The scope of human learning and development poses a radical challenge for cognitive
science. We propose that developmental theories can address this challenge by adopting …

From statistical relational to neurosymbolic artificial intelligence: A survey

G Marra, S Dumančić, R Manhaeve, L De Raedt - Artificial Intelligence, 2024 - Elsevier
This survey explores the integration of learning and reasoning in two different fields of
artificial intelligence: neurosymbolic and statistical relational artificial intelligence …

Voyager: An open-ended embodied agent with large language models

G Wang, Y **e, Y Jiang, A Mandlekar, C **ao… - arxiv preprint arxiv …, 2023 - arxiv.org
We introduce Voyager, the first LLM-powered embodied lifelong learning agent in Minecraft
that continuously explores the world, acquires diverse skills, and makes novel discoveries …

Faster sorting algorithms discovered using deep reinforcement learning

DJ Mankowitz, A Michi, A Zhernov, M Gelmi, M Selvi… - Nature, 2023 - nature.com
Fundamental algorithms such as sorting or hashing are used trillions of times on any given
day. As demand for computation grows, it has become critical for these algorithms to be as …

CodeRL: Mastering code generation through pretrained models and deep reinforcement learning

H Le, Y Wang, AD Gotmare… - Advances in Neural …, 2022 - proceedings.neurips.cc
Program synthesis or code generation aims to generate a program that satisfies a problem
specification. Recent approaches using large-scale pretrained language models (LMs) have …

CodeT: Code generation with generated tests

B Chen, F Zhang, A Nguyen, D Zan, Z Lin… - arXiv preprint arXiv …, 2022 - arxiv.org
The task of generating code solutions for a given programming problem can benefit from the
use of pre-trained language models such as Codex, which can produce multiple diverse …

Program synthesis with large language models

J Austin, A Odena, M Nye, M Bosma… - arXiv preprint arXiv …, 2021 - arxiv.org
This paper explores the limits of the current generation of large language models for
program synthesis in general purpose programming languages. We evaluate a collection of …

Can large language models reason about program invariants?

K Pei, D Bieber, K Shi, C Sutton… - … Conference on Machine …, 2023 - proceedings.mlr.press
Identifying invariants is an important program analysis task with applications towards
program understanding, bug finding, vulnerability analysis, and formal verification. Existing …

Planning with large language models for code generation

S Zhang, Z Chen, Y Shen, M Ding… - arXiv preprint arXiv …, 2023 - arxiv.org
Existing large language model-based code generation pipelines typically use beam search
or sampling algorithms during the decoding process. Although the programs they generate …

Identifying the risks of LM agents with an LM-emulated sandbox

Y Ruan, H Dong, A Wang, S Pitis, Y Zhou, J Ba… - arXiv preprint arXiv …, 2023 - arxiv.org
Recent advances in Language Model (LM) agents and tool use, exemplified by applications
like ChatGPT Plugins, enable a rich set of capabilities but also amplify potential risks, such …