Probing classifiers: Promises, shortcomings, and advances
Y Belinkov - Computational Linguistics, 2022 - direct.mit.edu
Probing classifiers have emerged as one of the prominent methodologies for interpreting
and analyzing deep neural network models of natural language processing. The basic idea …
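The probing-classifier setup this entry surveys is easy to sketch: extract frozen representations from a pretrained model, then train a simple supervised classifier to predict a target property from them. A minimal illustration, with synthetic features standing in for real hidden states (the array shapes and the binary property are placeholders, not from the paper):

```python
# Probing classifier: train a simple model on frozen representations
# to test whether they encode a target property.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder "frozen representations": 1000 examples of 768-dim vectors,
# each with a hypothetical binary property label (e.g., noun vs. verb).
# In practice these would be hidden states from a pretrained NLP model.
reps = rng.normal(size=(1000, 768))
labels = rng.integers(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    reps, labels, test_size=0.2, random_state=0)

# The probe is kept deliberately simple (linear): its held-out accuracy is
# read as evidence of how accessibly the property is encoded. Random
# features like these should score near chance (~0.5), a useful baseline
# to keep in mind when interpreting probe accuracy.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.3f}")
```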
Pre-trained models for natural language processing: A survey
X Qiu, T Sun, Y Xu, Y Shao, N Dai, X Huang - Science China Technological Sciences, 2020
Recently, the emergence of pre-trained models (PTMs) has brought natural language
processing (NLP) to a new era. In this survey, we provide a comprehensive review of PTMs …
BLOOM: A 176B-parameter open-access multilingual language model
T Le Scao, A Fan, C Akiki, et al. - arXiv preprint arXiv:2211.05100, 2022
Large language models (LLMs) have been shown to be able to perform new tasks based on
a few demonstrations or natural language instructions. While these capabilities have led to …
Locating and editing factual associations in GPT
K Meng, D Bau, A Andonian, Y Belinkov - Advances in Neural Information Processing Systems, 2022
We analyze the storage and recall of factual associations in autoregressive transformer
language models, finding evidence that these associations correspond to localized, directly …
Unlearn what you want to forget: Efficient unlearning for LLMs
J Chen, D Yang - Proceedings of EMNLP, 2023
Large language models (LLMs) have achieved significant progress from pre-training on and
memorizing a wide range of textual data; however, this process might suffer from privacy …
Fine-tuning can distort pretrained features and underperform out-of-distribution
A Kumar, A Raghunathan, R Jones, T Ma, P Liang - International Conference on Learning Representations, 2022
When transferring a pretrained model to a downstream task, two popular methods are full
fine-tuning (updating all the model parameters) and linear probing (updating only the last …
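The two transfer methods this entry contrasts differ only in which parameters are trainable. A minimal PyTorch sketch, assuming a generic backbone-plus-head model (the `body`/`head` names and learning rates are illustrative, not from the paper):

```python
# Full fine-tuning vs. linear probing on a generic pretrained model.
import torch
import torch.nn as nn

# Hypothetical pretrained backbone plus a task head.
model = nn.Sequential()
model.add_module("body", nn.Sequential(nn.Linear(768, 768), nn.ReLU()))
model.add_module("head", nn.Linear(768, 2))

def full_fine_tuning(model):
    # Every parameter receives gradients and is handed to the optimizer.
    for p in model.parameters():
        p.requires_grad = True
    return torch.optim.AdamW(model.parameters(), lr=1e-5)

def linear_probing(model):
    # Freeze the whole network, then unfreeze only the last (head) layer,
    # so the pretrained features themselves are never updated.
    for p in model.parameters():
        p.requires_grad = False
    for p in model.head.parameters():
        p.requires_grad = True
    return torch.optim.AdamW(model.head.parameters(), lr=1e-3)
```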
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, et al. - arXiv preprint arXiv:2108.07258, 2021
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …
Fast model editing at scale
E Mitchell, C Lin, A Bosselut, C Finn, CD Manning - International Conference on Learning Representations, 2022
While large pre-trained models have enabled impressive results on a variety of downstream
tasks, the largest existing models still make errors, and even accurate predictions may …
Factual probing is [MASK]: Learning vs. learning to recall
Z Zhong, D Friedman, D Chen - Proceedings of NAACL-HLT, 2021
Petroni et al. (2019) demonstrated that it is possible to retrieve world facts from a pre-trained
language model by expressing them as cloze-style prompts and interpret the model's …
How can we know what language models know?
Z Jiang, FF Xu, J Araki, G Neubig - Transactions of the Association for Computational Linguistics, 2020
Recent work has presented intriguing results examining the knowledge contained in
language models (LMs) by having the LM fill in the blanks of prompts such as “Obama is a …
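Both of the last two entries probe factual knowledge with cloze-style prompts: a fact is phrased as a fill-in-the-blank sentence and the masked language model's top predictions are read off as its answer. A minimal sketch using the Hugging Face `transformers` fill-mask pipeline (the model choice and prompt are illustrative, not from either paper):

```python
# Cloze-style factual probing: phrase a fact as a fill-in-the-blank
# prompt and inspect the masked LM's top-ranked completions.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# The probe treats the highest-probability token as the model's "answer".
for pred in unmasker("Barack Obama was born in [MASK].")[:3]:
    print(f"{pred['token_str']:>12}  p={pred['score']:.3f}")
```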