Large language models are zero-shot fuzzers: Fuzzing deep-learning libraries via large language models
Deep Learning (DL) systems have received exponential growth in popularity and have
become ubiquitous in our everyday life. Such systems are built on top of popular DL …
Program synthesis with large language models
This paper explores the limits of the current generation of large language models for
program synthesis in general purpose programming languages. We evaluate a collection of …
Hypothesis search: Inductive reasoning with language models
Inductive reasoning is a core problem-solving capacity: humans can identify underlying
principles from a few examples, which robustly generalize to novel scenarios. Recent work …
Parsel🐍: Algorithmic Reasoning with Language Models by Composing Decompositions
Despite recent success in large language model (LLM) reasoning, LLMs struggle with
hierarchical multi-step reasoning tasks like generating complex programs. For these tasks …
LEGO: Latent execution-guided reasoning for multi-hop question answering on knowledge graphs
Answering complex natural language questions on knowledge graphs (KGQA) is a
challenging task. It requires reasoning with the input natural language questions as well as …
SmBoP: Semi-autoregressive bottom-up semantic parsing
The de-facto standard decoding method for semantic parsing in recent years has been to
autoregressively decode the abstract syntax tree of the target program using a top-down …
Symbolic metaprogram search improves learning efficiency and explains rule learning in humans
Throughout their lives, humans seem to learn a variety of rules for things like applying
category labels, following procedures, and explaining causal relationships. These rules are …
HySynth: Context-free LLM approximation for guiding program synthesis
Many structured prediction and reasoning tasks can be framed as program synthesis
problems, where the goal is to generate a program in a domain-specific …
Reasoning like program executors
Reasoning over natural language is a long-standing goal for the research community.
However, studies have shown that existing language models are inadequate in reasoning …
Programmatic reinforcement learning without oracles
Deep reinforcement learning (RL) has led to encouraging successes in many challenging
control tasks. However, a deep RL model lacks interpretability due to the difficulty of …