Authors: Pinzheng Wang, Zecheng Tang, Keyan Zhou, Juntao Li, Qiaoming Zhu, Min Zhang
Abstract: Large Language Models have demonstrated superior performance across a wide
range of tasks, but they still exhibit undesirable errors due to incorrect
knowledge learned from the training data. To avoid this, knowledge editing
methods have emerged to precisely edit specific model knowledge by efficiently
modifying a very small percentage of parameters. However, those methods can
lead to the problem of Specificity Failure, where existing knowledge and
capabilities are severely degraded due to editing. Our preliminary study
indicates that Specificity Failure
primarily stems from the model’s attention heads assigning excessive attention
scores to entities related to the edited knowledge, thereby unduly focusing on
specific snippets within the context, which we denote as the Attention Drift
phenomenon. To mitigate this Attention Drift issue, we introduce a simple yet
effective method, Selective Attention Drift Restriction (SADR), which adds
an additional regularization term during the knowledge editing process to
restrict changes in the attention weight distribution, thereby preventing undue
focus on the edited entity. Experiments on five frequently used strong LLMs
demonstrate the effectiveness of our method, where SADR can significantly
mitigate Specificity Failure in the predominant knowledge editing tasks.
Source: http://arxiv.org/abs/2502.14838v1
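The abstract does not spell out the exact form of the regularization term, but a minimal sketch of the general idea (penalizing divergence between the pre-edit and post-edit attention weight distributions of selected heads during editing) might look as follows in PyTorch. The KL-divergence form, the coefficient `lam`, and the function names are illustrative assumptions rather than the paper's actual formulation.

```python
import torch


def attention_drift_penalty(attn_before: torch.Tensor,
                            attn_after: torch.Tensor,
                            eps: float = 1e-8) -> torch.Tensor:
    """KL divergence between the pre-edit and post-edit attention
    distributions of a single head, averaged over query positions.

    Both tensors have shape (num_queries, num_keys) and hold
    row-normalized attention weights.
    """
    kl = (attn_before *
          ((attn_before + eps).log() - (attn_after + eps).log())).sum(dim=-1)
    return kl.mean()


def edit_objective_with_restriction(edit_loss: torch.Tensor,
                                    attn_before: list[torch.Tensor],
                                    attn_after: list[torch.Tensor],
                                    lam: float = 0.1) -> torch.Tensor:
    """Total objective: the usual knowledge-editing loss plus a penalty
    that discourages shifts in the attention distributions of the heads
    selected for restriction (which heads to restrict, and the weight
    lam, are choices left to the caller in this sketch).
    """
    drift = torch.stack([
        attention_drift_penalty(b, a)
        for b, a in zip(attn_before, attn_after)
    ]).mean()
    return edit_loss + lam * drift
```

In this sketch, the pre-edit attention weights are recorded before the parameter update and the penalty keeps the edited model from shifting attention mass toward the edited entity, which is the behavior the abstract identifies as Attention Drift.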