Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing
This article surveys and organizes research works in a new paradigm in natural language
processing, which we dub “prompt-based learning.” Unlike traditional supervised learning …
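The survey's central object is the prompting function: a template that wraps an input into a cloze or prefix string, plus a verbalizer mapping labels to answer words the LM can fill in. A minimal sketch of that pipeline, where the template, label words, and toy scorer are illustrative assumptions rather than anything from the survey:

```python
# Sketch of prompt-based learning: template + verbalizer + LM scoring.
TEMPLATE = "Review: {x} Overall, it was a [MASK] movie."
VERBALIZER = {"positive": "great", "negative": "terrible"}  # label -> answer word

def toy_lm_score(prompt: str, word: str) -> float:
    """Stand-in for an LM's probability of `word` at the [MASK] slot.
    A real system would query a masked or autoregressive LM here."""
    return {"great": 0.8, "terrible": 0.2}.get(word, 0.0)

def classify(x: str) -> str:
    prompt = TEMPLATE.format(x=x)
    # Pick the label whose verbalized answer word the LM prefers at the slot.
    return max(VERBALIZER, key=lambda lbl: toy_lm_score(prompt, VERBALIZER[lbl]))

print(classify("A gripping, beautifully shot film."))  # -> "positive"
```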
A survey of confidence estimation and calibration in large language models
Large language models (LLMs) have demonstrated remarkable capabilities across a wide
range of tasks in various domains. Despite their impressive performance, they can be …
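A common baseline in this literature is sequence-level confidence from length-normalized token log-probabilities. A minimal sketch of that baseline, using toy numbers in place of real model outputs:

```python
import math

# Toy per-token log-probabilities for one generated answer.
token_logprobs = [-0.10, -0.45, -0.02, -1.30]

def sequence_confidence(logprobs):
    """exp of the mean token log-prob: a simple, length-normalized
    confidence score for a generated sequence."""
    return math.exp(sum(logprobs) / len(logprobs))

print(f"confidence: {sequence_confidence(token_logprobs):.3f}")
```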
SimPO: Simple preference optimization with a reference-free reward
Direct Preference Optimization (DPO) is a widely used offline preference
optimization algorithm that reparameterizes reward functions in reinforcement learning from …
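SimPO's reference-free reward is the length-normalized log-likelihood of a response under the policy, and its loss is a margin-shifted Bradley-Terry objective over a chosen/rejected pair. A minimal sketch; the log-probabilities are toy values, and β and γ are illustrative settings:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def simpo_loss(chosen_logprobs, rejected_logprobs, beta=2.0, gamma=0.5):
    """SimPO: length-normalized, reference-free rewards with a target
    reward margin gamma between the chosen and rejected response."""
    r_chosen = beta * sum(chosen_logprobs) / len(chosen_logprobs)
    r_rejected = beta * sum(rejected_logprobs) / len(rejected_logprobs)
    return -math.log(sigmoid(r_chosen - r_rejected - gamma))

# Toy per-token log-probs for a preferred and a dispreferred answer.
print(simpo_loss([-0.2, -0.1, -0.3], [-0.9, -1.1, -0.7, -0.8]))
```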
Automatic chain of thought prompting in large language models
Large language models (LLMs) can perform complex reasoning by generating intermediate
reasoning steps. Providing these steps for prompting demonstrations is called chain-of …
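Auto-CoT's recipe is to cluster the task's questions, pick a representative per cluster, and let the LM write each demonstration's reasoning chain itself via the zero-shot trigger "Let's think step by step." A sketch of the prompt assembly; the clustering step is replaced by a hard-coded selection, and generate_chain is a hypothetical stand-in for an LM call:

```python
# Representatives per question cluster (the paper uses k-means over
# sentence embeddings; hard-coded here for brevity).
representatives = [
    "If a pen costs $2, how much do 3 pens cost?",
    "A train travels 60 miles in 1 hour. How far in 2.5 hours?",
]

def generate_chain(question: str) -> str:
    """Hypothetical stand-in: a real system would sample the LM's
    continuation of `question + " A: Let's think step by step."`"""
    return "Let's think step by step. <model-generated reasoning> The answer is X."

demos = "\n\n".join(f"Q: {q}\nA: {generate_chain(q)}" for q in representatives)
test_question = "If 4 apples cost $8, how much does one apple cost?"
prompt = f"{demos}\n\nQ: {test_question}\nA: Let's think step by step."
print(prompt)
```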
Making language models better reasoners with step-aware verifier
Few-shot learning is a challenging task that requires language models to generalize from
limited examples. Large language models like GPT-3 and PaLM have made impressive …
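The method samples diverse reasoning paths and replaces plain majority voting with a verifier-weighted vote over the final answers. A minimal sketch of that aggregation step; the verifier scores are toy values standing in for a trained step-aware verifier's output:

```python
from collections import defaultdict

# Each tuple is (final_answer, verifier_score) for one sampled reasoning path.
paths = [("42", 0.91), ("42", 0.85), ("17", 0.40), ("42", 0.10), ("17", 0.88)]

votes = defaultdict(float)
for answer, score in paths:
    votes[answer] += score  # weight each path's vote by the verifier score

best = max(votes, key=votes.get)
print(best, dict(votes))  # "42" wins on verifier-weighted votes
```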
Trusting your evidence: Hallucinate less with context-aware decoding
Language models (LMs) often struggle to pay enough attention to the input context,
and generate texts that are unfaithful or contain hallucinations. To mitigate this issue, we …
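Context-aware decoding contrasts the model's next-token distribution with and without the context, amplifying what the context contributes: sample from softmax((1+α)·logits_with_context − α·logits_without_context). A sketch over a toy three-token vocabulary; the logits and α are illustrative:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def context_aware_dist(logits_with_ctx, logits_no_ctx, alpha=0.5):
    """softmax((1+a) * logits_with_context - a * logits_without_context)."""
    contrasted = [(1 + alpha) * w - alpha * n
                  for w, n in zip(logits_with_ctx, logits_no_ctx)]
    return softmax(contrasted)

with_ctx = [2.0, 1.0, 0.1]  # the context favors token 0
no_ctx   = [0.5, 1.8, 0.1]  # the model's prior favors token 1
print(context_aware_dist(with_ctx, no_ctx))  # mass shifts toward token 0
```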
Rethinking the role of demonstrations: What makes in-context learning work?
Large language models (LMs) are able to in-context learn--perform a new task via inference
alone by conditioning on a few input-label pairs (demonstrations) and making predictions for …
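The study probes in-context learning by perturbing the demonstrations, e.g., replacing gold labels with random ones while keeping the format and input distribution fixed. A sketch of building the two prompt variants for such a probe; the examples are made up:

```python
import random

random.seed(0)
labels = ["positive", "negative"]
demos = [("The film was a delight.", "positive"),
         ("I walked out halfway.", "negative"),
         ("A masterpiece of pacing.", "positive")]

def build_prompt(pairs, test_input):
    lines = [f"Review: {x}\nSentiment: {y}" for x, y in pairs]
    return "\n\n".join(lines) + f"\n\nReview: {test_input}\nSentiment:"

gold = build_prompt(demos, "Utterly forgettable.")
randomized = build_prompt([(x, random.choice(labels)) for x, _ in demos],
                          "Utterly forgettable.")
# The paper's finding: LMs behave surprisingly similarly on both variants,
# suggesting format and input distribution drive ICL more than label correctness.
print(randomized)
```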
Ask me anything: A simple strategy for prompting language models
Large language models (LLMs) transfer well to new tasks out-of-the-box simply given a
natural language prompt that demonstrates how to perform the task and no additional …
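Ask Me Anything reformats one task into several question-style prompts, collects each prompt's noisy prediction, and aggregates them; the paper uses weak supervision for the aggregation, and the plain majority vote below is a deliberately simplified stand-in. The prompt variants and canned answers are illustrative:

```python
from collections import Counter

prompt_variants = [
    "Is the following review positive? {x}",
    "Did the reviewer enjoy it? {x}",
    "Would the reviewer recommend it? {x}",
]

def run_prompt(template: str, x: str) -> str:
    """Hypothetical stand-in for querying an LM with each reformatted
    prompt; returns a canned noisy answer keyed on the template."""
    return {"Is": "yes", "Did": "yes", "Would": "no"}[template.split()[0]]

x = "A gripping, beautifully shot film."
answers = [run_prompt(t, x) for t in prompt_variants]
print(Counter(answers).most_common(1)[0][0])  # majority vote -> "yes"
```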
Learning to retrieve prompts for in-context learning
In-context learning is a recent paradigm in natural language understanding, where a large
pre-trained language model (LM) observes a test instance and a few training examples as …
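The core move is to select demonstrations by similarity to the test instance. The paper trains a dense retriever with LM feedback; the bag-of-words cosine similarity below is a deliberately simplified stand-in for that retriever, with a made-up training pool:

```python
import math
from collections import Counter

train_pool = [
    "convert 3 km to miles",
    "what is the capital of France",
    "convert 10 kg to pounds",
]

def bow(text):
    """Bag-of-words vector as a Counter."""
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

test = "convert 5 km to miles"
ranked = sorted(train_pool, key=lambda ex: cosine(bow(test), bow(ex)), reverse=True)
print(ranked[:2])  # the most similar examples become the in-context demonstrations
```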
Finetuned language models are zero-shot learners
This paper explores a simple method for improving the zero-shot learning abilities of
language models. We show that instruction tuning--finetuning language models on a …
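Instruction tuning's data transformation renders each dataset example through natural-language instruction templates and then finetunes on the resulting input/target pairs. A sketch of that rendering step; the templates and example are made up, not the paper's actual ones:

```python
# Render one translation example through two instruction templates,
# producing (prompt, target) pairs for ordinary supervised finetuning.
templates = [
    "Translate the following sentence to French: {text}",
    "How would you say this in French? {text}",
]

example = {"text": "Good morning", "target": "Bonjour"}

finetuning_pairs = [(t.format(**example), example["target"]) for t in templates]
for prompt, target in finetuning_pairs:
    print(f"INPUT:  {prompt}\nTARGET: {target}\n")
```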