From task structures to world models: what do LLMs know?
In what sense does a large language model (LLM) have knowledge? We answer by
granting LLMs 'instrumental knowledge': knowledge gained by using next-word generation …
Tree of thoughts: Deliberate problem solving with large language models
S Yao, D Yu, J Zhao, I Shafran… - Advances in neural …, 2023 - proceedings.neurips.cc
Abstract Language models are increasingly being deployed for general problem solving
across a wide range of tasks, but are still confined to token-level, left-to-right decision …
Controllable text generation for large language models: A survey
In Natural Language Processing (NLP), Large Language Models (LLMs) have demonstrated
high text generation quality. However, in real-world applications, LLMs must meet …
Planning with large language models for code generation
Existing large language model-based code generation pipelines typically use beam search
or sampling algorithms during the decoding process. Although the programs they generate …
A contrastive framework for neural text generation
Text generation is of great importance to many natural language processing applications.
However, maximization-based decoding methods (e.g., beam search) of neural language …
Internet-augmented language models through few-shot prompting for open-domain question answering
In this work, we aim to capitalize on the unique few-shot capabilities of large-scale language
models (LSLMs) to overcome some of their challenges with respect to grounding to factual …
Controlled text generation with natural language instructions
Large language models can be prompted to produce fluent output for a wide range of tasks
without being specifically trained to do so. Nevertheless, it is notoriously difficult to control …
Controlled decoding from language models
KL-regularized reinforcement learning (RL) is a popular alignment framework to control the
language model responses towards high reward outcomes. We pose a tokenwise RL …
Break the sequential dependency of LLM inference using lookahead decoding
Autoregressive decoding of large language models (LLMs) is memory bandwidth bounded,
resulting in high latency and significant wastes of the parallel processing power of modern …
Branch-solve-merge improves large language model evaluation and generation
Large Language Models (LLMs) are frequently used for multi-faceted language generation
and evaluation tasks that involve satisfying intricate user constraints or taking into account …