Knowledge editing for large language models: A survey
Large Language Models (LLMs) have recently transformed both the academic and industrial
landscapes due to their remarkable capacity to understand, analyze, and generate texts …
A review on language models as knowledge bases
Recently, there has been a surge of interest in the NLP community on the use of pretrained
Language Models (LMs) as Knowledge Bases (KBs). Researchers have shown that LMs …
On the opportunities and risks of foundation models
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …
Locating and editing factual associations in GPT
We analyze the storage and recall of factual associations in autoregressive transformer
language models, finding evidence that these associations correspond to localized, directly …
GLM-130B: An open bilingual pre-trained model
We introduce GLM-130B, a bilingual (English and Chinese) pre-trained language model
with 130 billion parameters. It is an attempt to open-source a 100B-scale model at least as …
Editing large language models: Problems, methods, and opportunities
Despite the ability to train capable LLMs, the methodology for maintaining their relevancy
and rectifying errors remains elusive. To this end, the past few years have witnessed a surge …
Does localization inform editing? surprising differences in causality-based localization vs. knowledge editing in language models
Abstract Language models learn a great quantity of factual information during pretraining,
and recent work localizes this information to specific model weights like mid-layer MLP …
Editing factual knowledge in language models
The factual knowledge acquired during pre-training and stored in the parameters of
Language Models (LMs) can be useful in downstream tasks (e.g., question answering or …
MQuAKE: Assessing knowledge editing in language models via multi-hop questions
The information stored in large language models (LLMs) falls out of date quickly, and
retraining from scratch is often not an option. This has recently given rise to a range of …
Time-aware language models as temporal knowledge bases
Many facts come with an expiration date, from the name of the President to the basketball
team LeBron James plays for. However, most language models (LMs) are trained on …