Knowledge editing for large language models: A survey
Large Language Models (LLMs) have recently transformed both the academic and industrial
landscapes due to their remarkable capacity to understand, analyze, and generate texts …
A review on language models as knowledge bases
Recently, there has been a surge of interest in the NLP community on the use of pretrained
Language Models (LMs) as Knowledge Bases (KBs). Researchers have shown that LMs …
GLM-130B: An open bilingual pre-trained model
We introduce GLM-130B, a bilingual (English and Chinese) pre-trained language model
with 130 billion parameters. It is an attempt to open-source a 100B-scale model at least as …
Editing large language models: Problems, methods, and opportunities
Despite the ability to train capable LLMs, the methodology for maintaining their relevancy
and rectifying errors remains elusive. To this end, the past few years have witnessed a surge …
Locating and editing factual associations in GPT
We analyze the storage and recall of factual associations in autoregressive transformer
language models, finding evidence that these associations correspond to localized, directly …
MQuAKE: Assessing knowledge editing in language models via multi-hop questions
The information stored in large language models (LLMs) falls out of date quickly, and
retraining from scratch is often not an option. This has recently given rise to a range of …
On the opportunities and risks of foundation models
AI is undergoing a paradigm shift with the rise of models (eg, BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …
Does localization inform editing? Surprising differences in causality-based localization vs. knowledge editing in language models
Language models learn a great quantity of factual information during pretraining,
and recent work localizes this information to specific model weights like mid-layer MLP …
Red teaming ChatGPT via jailbreaking: Bias, robustness, reliability and toxicity
Recent breakthroughs in natural language processing (NLP) have permitted the synthesis
and comprehension of coherent text in an open-ended way, therefore translating the …
Aging with grace: Lifelong model editing with discrete key-value adaptors
Deployed language models decay over time due to shifting inputs, changing user needs, or
emergent world-knowledge gaps. When such problems are identified, we want to make …