Knowledge editing for large language models: A survey

S Wang, Y Zhu, H Liu, Z Zheng, C Chen, J Li - ACM Computing Surveys, 2024 - dl.acm.org
Large Language Models (LLMs) have recently transformed both the academic and industrial
landscapes due to their remarkable capacity to understand, analyze, and generate texts …
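
As a concrete illustration of the task this survey covers: a knowledge edit is usually specified as a fact triple whose object should change, and the edited model is expected to return the new object afterwards. The sketch below uses a toy lookup table in place of model weights; `FactEdit` and `apply_edit` are hypothetical names for illustration, not the survey's API.

```python
from dataclasses import dataclass

@dataclass
class FactEdit:
    """A single knowledge edit: (subject, relation) should now map to new_object."""
    subject: str
    relation: str
    new_object: str

# Toy stand-in for a model's parametric knowledge: (subject, relation) -> object.
knowledge = {("UK", "prime_minister"): "Boris Johnson"}

def apply_edit(store: dict, edit: FactEdit) -> None:
    # A real parameter-editing method would modify the model weights directly;
    # here we simply overwrite the lookup table entry.
    store[(edit.subject, edit.relation)] = edit.new_object

edit = FactEdit("UK", "prime_minister", "Rishi Sunak")
apply_edit(knowledge, edit)
assert knowledge[("UK", "prime_minister")] == "Rishi Sunak"
```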

Can Editing LLMs Inject Harm?

C Chen, B Huang, Z Li, Z Chen, S Lai, X Xu… - arXiv preprint arXiv …, 2024 - arxiv.org
Knowledge editing has been increasingly adopted to correct the false or outdated
knowledge in Large Language Models (LLMs). Meanwhile, one critical but under-explored …

Leveraging logical rules in knowledge editing: A cherry on the top

K Cheng, MA Ali, S Yang, G Lin, Y Zhai, H Fei… - arXiv preprint arXiv …, 2024 - arxiv.org
Multi-hop Question Answering (MQA) under knowledge editing (KE) is a key challenge in
Large Language Models (LLMs). While best-performing solutions in this domain use a plan …
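
The multi-hop difficulty this line of work targets can be seen in a few lines: once one hop is edited, every later hop must reason over the new intermediate entity. The toy chain below is illustrative only and is not the paper's rule-based method.

```python
# Why one edit must propagate through a multi-hop chain.
# The fact table and hop structure are hypothetical.
facts = {
    ("Eiffel Tower", "located_in"): "Paris",
    ("Paris", "country"): "France",
}

def answer_two_hop(entity: str, r1: str, r2: str) -> str:
    mid = facts[(entity, r1)]   # hop 1
    return facts[(mid, r2)]     # hop 2 must use hop 1's (possibly edited) output

# Before editing: "What country is the Eiffel Tower in?" -> France
assert answer_two_hop("Eiffel Tower", "located_in", "country") == "France"

# Edit hop 1; the chain now needs a fact about the *new* intermediate entity.
facts[("Eiffel Tower", "located_in")] = "Rome"
facts[("Rome", "country")] = "Italy"
assert answer_two_hop("Eiffel Tower", "located_in", "country") == "Italy"
```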

Retrieval-enhanced knowledge editing in language models for multi-hop question answering

Y Shi, Q Tan, X Wu, S Zhong, K Zhou… - Proceedings of the 33rd …, 2024 - dl.acm.org
Large Language Models (LLMs) have shown proficiency in question-answering tasks but
often struggle to integrate real-time knowledge, leading to potentially outdated or inaccurate …
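
Retrieval-enhanced approaches of this kind keep edited facts in an external memory and surface the relevant ones at answer time. A minimal sketch of that pattern, with a crude token-overlap scorer standing in for a real retriever (the paper's actual retrieval strategy may differ):

```python
# Edits live in an external store; the most relevant edit is retrieved per
# question and prepended to the prompt so the model answers from the edit
# instead of its stale parametric knowledge.
edit_memory = [
    "The prime minister of the UK is Rishi Sunak.",
    "The Eiffel Tower is located in Rome.",
]

def retrieve(question: str, memory: list[str]) -> str:
    # Deliberately crude token overlap; an embedding retriever would go here.
    q_tokens = set(question.lower().split())
    return max(memory, key=lambda fact: len(q_tokens & set(fact.lower().split())))

question = "Who is the prime minister of the UK?"
fact = retrieve(question, edit_memory)
prompt = f"Fact: {fact}\nQuestion: {question}\nAnswer:"
print(prompt)
```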

Can Knowledge Editing Really Correct Hallucinations?

B Huang, C Chen, X Xu, A Payani, K Shu - arXiv preprint arXiv …, 2024 - arxiv.org
Despite their superior capacities across tasks, Large Language Models (LLMs) suffer from
hallucinations, i.e., non-factual information in generated content. Meanwhile …

Should We Really Edit Language Models? On the Evaluation of Edited Language Models

Q Li, X Liu, Z Tang, P Dong, Z Li, X Pan… - arXiv preprint arXiv …, 2024 - arxiv.org
Model editing has become an increasingly popular alternative for efficiently updating
knowledge within language models. Current methods mainly focus on reliability …
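
The evaluation dimensions this line of work examines, reliability, generality, and locality, are all simple accuracies over different probe sets: does the edit hold, does it survive paraphrase, and do unrelated facts stay intact. A toy computation, with a lookup-table stub in place of an edited model and hypothetical probe sets:

```python
# Toy computation of the three standard editing metrics.
def model(question: str, kb: dict) -> str:
    return kb.get(question, "unknown")

kb_edited = {
    "Who is the UK PM?": "Rishi Sunak",             # the edit target
    "Who leads the UK government?": "Rishi Sunak",  # paraphrase covered
    "What is the capital of France?": "Paris",      # unrelated fact preserved
}

reliability_probes = [("Who is the UK PM?", "Rishi Sunak")]
generality_probes  = [("Who leads the UK government?", "Rishi Sunak")]
locality_probes    = [("What is the capital of France?", "Paris")]

def accuracy(probes, kb):
    return sum(model(q, kb) == a for q, a in probes) / len(probes)

print("reliability:", accuracy(reliability_probes, kb_edited))
print("generality: ", accuracy(generality_probes, kb_edited))
print("locality:   ", accuracy(locality_probes, kb_edited))
```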

MRKE: The Multi-hop Reasoning Evaluation of LLMs by Knowledge Edition

J Wu, L Yang, M Okumura, Y Zhang - arXiv preprint arXiv:2402.11924, 2024 - arxiv.org
Although Large Language Models (LLMs) have shown strong performance in Multi-hop
Question Answering (MHQA) tasks, their real reasoning ability remains under-explored. Current …

Multi-hop question answering under temporal knowledge editing

K Cheng, G Lin, H Fei, L Yu, MA Ali, L Hu… - arXiv preprint arXiv …, 2024 - arxiv.org
Multi-hop question answering (MQA) under knowledge editing (KE) has garnered significant
attention in the era of large language models. However, existing models for MQA under KE …
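
Temporal editing distinguishes a fact that changed at some point in time from a fact that was simply wrong, which suggests versioned storage rather than overwriting. A hypothetical sketch of a time-stamped fact store (not the paper's method):

```python
# Each edit carries a validity start date; queries resolve to the version in
# force at the asked time. Structure and names are hypothetical.
from datetime import date

facts = {
    ("UK", "prime_minister"): [
        (date(2019, 7, 24), "Boris Johnson"),
        (date(2022, 10, 25), "Rishi Sunak"),  # a temporal change, not a correction
    ],
}

def query(subject: str, relation: str, as_of: date) -> str:
    answer = None
    for start, obj in sorted(facts[(subject, relation)]):
        if start <= as_of:
            answer = obj
    return answer

assert query("UK", "prime_minister", date(2021, 1, 1)) == "Boris Johnson"
assert query("UK", "prime_minister", date(2023, 1, 1)) == "Rishi Sunak"
```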

GenDec: A robust generative Question-decomposition method for Multi-hop reasoning

J Wu, L Yang, Y Ji, W Huang, BF Karlsson… - arXiv preprint arXiv …, 2024 - arxiv.org
Multi-hop QA (MHQA) involves step-by-step reasoning to answer complex questions and
find multiple relevant supporting facts. However, existing large language models' (LLMs) …
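
Question decomposition of the kind GenDec proposes splits a multi-hop question into sub-questions answered in sequence, with later hops conditioned on earlier answers. A skeleton of that loop, where both stubs stand in for LLM calls and the decomposition shown is hand-written:

```python
def decompose(question: str) -> list[str]:
    # Stub: a generative decomposer would produce these sub-questions.
    return [
        "Which city is the Eiffel Tower located in?",
        "Which country is that city in?",
    ]

def answer_sub(sub_question: str, context: str) -> str:
    # Stub: a single-hop QA model, conditioned on earlier answers via context.
    canned = {
        "Which city is the Eiffel Tower located in?": "Paris",
        "Which country is that city in?": "France",
    }
    return canned[sub_question]

def multi_hop_answer(question: str) -> str:
    context, answer = "", ""
    for sub in decompose(question):
        answer = answer_sub(sub, context)
        context += f"{sub} {answer}\n"  # later hops see earlier answers
    return answer

assert multi_hop_answer("What country is the Eiffel Tower in?") == "France"
```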

Can we continually edit language models? on the knowledge attenuation in sequential model editing

Q Li, X Chu - Findings of the Association for Computational …, 2024 - aclanthology.org
Model editing has become a promising method for precisely and effectively
updating knowledge in language models. In this paper, we investigate knowledge …
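
Knowledge attenuation in sequential editing can be made concrete by applying edits one at a time and re-testing all earlier edits after each step. The sketch below fakes the interference with random corruption purely to show the measurement loop; it does not model any real editing method:

```python
import random

random.seed(0)
edits = [(f"subject_{i}", f"object_{i}") for i in range(100)]
memory: dict[str, str] = {}

retention_curve = []
for step, (subj, obj) in enumerate(edits, start=1):
    memory[subj] = obj
    # Artificial interference: each new edit corrupts 2% of earlier edits.
    for prev_subj, _ in edits[: step - 1]:
        if random.random() < 0.02:
            memory[prev_subj] = "corrupted"
    # Retention: fraction of all edits so far that still hold.
    retained = sum(memory[s] == o for s, o in edits[:step]) / step
    retention_curve.append(retained)

print(f"retention after {len(edits)} sequential edits: {retention_curve[-1]:.2f}")
```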