Probing classifiers: Promises, shortcomings, and advances
Y Belinkov - Computational Linguistics, 2022 - direct.mit.edu
Probing classifiers have emerged as one of the prominent methodologies for interpreting
and analyzing deep neural network models of natural language processing. The basic idea …
Post-hoc interpretability for neural NLP: A survey
Neural networks for NLP are becoming increasingly complex and widespread, and there is a
growing concern if these models are responsible to use. Explaining models helps to address …
BLOOM: A 176B-parameter open-access multilingual language model
Large language models (LLMs) have been shown to be able to perform new tasks based on
a few demonstrations or natural language instructions. While these capabilities have led to …
Emergent world representations: Exploring a sequence model trained on a synthetic task
Language models show a surprising range of capabilities, but the source of their apparent
competence is unclear. Do these networks just memorize a collection of surface statistics, or …
Locating and editing factual associations in GPT
We analyze the storage and recall of factual associations in autoregressive transformer
language models, finding evidence that these associations correspond to localized, directly …
On the opportunities and risks of foundation models
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …
Do vision transformers see like convolutional neural networks?
Convolutional neural networks (CNNs) have so far been the de-facto model for visual data.
Recent work has shown that (Vision) Transformer models (ViT) can achieve comparable or …
Black-box access is insufficient for rigorous AI audits
External audits of AI systems are increasingly recognized as a key mechanism for AI
governance. The effectiveness of an audit, however, depends on the degree of access …
Fast model editing at scale
While large pre-trained models have enabled impressive results on a variety of downstream
tasks, the largest existing models still make errors, and even accurate predictions may …
Rethinking interpretability in the era of large language models
Interpretable machine learning has exploded as an area of interest over the last decade,
sparked by the rise of increasingly large datasets and deep neural networks …