Should We Really Edit Language Models? On the Evaluation of Edited Language Models

Q Li, X Liu, Z Tang, P Dong, Z Li… - Advances in Neural …, 2025 - proceedings.neurips.cc
Abstract: Model editing has become an increasingly popular alternative for efficiently
updating knowledge within language models. Current methods mainly focus on reliability …

MEMLA: Enhancing Multilingual Knowledge Editing with Neuron-Masked Low-Rank Adaptation

J Xie, P Cao, Y Chen, Y Chen, K Liu, J Zhao - arXiv preprint arXiv …, 2024 - arxiv.org
Knowledge editing aims to adjust the knowledge within large language models (LLMs) to
prevent their responses from becoming obsolete or inaccurate. However, existing works on …

Knowledge Localization: Mission Not Accomplished? Enter Query Localization!

Y Chen, P Cao, Y Chen, K Liu, J Zhao - arXiv preprint arXiv:2405.14117, 2024 - arxiv.org
Large language models (LLMs) store extensive factual knowledge, but the mechanisms
behind how they store and express this knowledge remain unclear. The Knowledge Neuron …

Bring Your Own Knowledge: A Survey of Methods for LLM Knowledge Expansion

M Wang, A Stoll, L Lange, H Adel, H Schütze… - arXiv preprint arXiv …, 2025 - arxiv.org
Adapting large language models (LLMs) to new and diverse knowledge is essential for their
lasting effectiveness in real-world applications. This survey provides an overview of state-of …