Authors: Bruno Mlodozeniec, Runa Eschenhagen, Juhan Bae, Alexander Immer, David Krueger, Richard Turner
Abstract: Diffusion models have led to significant advancements in generative
modelling. Yet their widespread adoption poses challenges regarding data
attribution and interpretability. In this paper, we aim to help address such
challenges in diffusion models by developing an influence functions
framework. Influence function-based data attribution methods approximate how a
model’s output would have changed if some training data were removed. In
supervised learning, this is usually used for predicting how the loss on a
particular example would change. For diffusion models, we focus on predicting
the change in the probability of generating a particular example via several
proxy measurements. We show how to formulate influence functions for such
quantities and how previously proposed methods can be interpreted as particular
design choices in our framework. To ensure scalability of the Hessian
computations in influence functions, we systematically develop K-FAC
approximations based on generalised Gauss-Newton matrices specifically tailored
to diffusion models. We show that our recommended method outperforms
previous data attribution approaches on common evaluations, such as the Linear
Data-modelling Score (LDS) or retraining without top influences, without the
need for method-specific hyperparameter tuning.
Source: http://arxiv.org/abs/2410.13850v1
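For context, influence-function frameworks of this kind are typically built on the standard first-order approximation sketched below; this is a generic background sketch under common assumptions, not the paper's exact formulation. Here the measurement m, the per-example loss \ell, and the Hessian H are placeholder symbols, and in practice H is replaced by a tractable surrogate such as a K-FAC approximation of the generalised Gauss-Newton matrix, as the abstract describes.

% Generic first-order influence-function approximation (background sketch only).
% \theta^{*}: parameters trained on all N examples; \theta_{-j}: parameters retrained without example z_j;
% m: a measurement of interest (e.g. a proxy for the probability of generating a query example);
% \ell: the per-example training loss; H: the training-loss Hessian or a GGN/K-FAC surrogate.
\[
  m(\theta_{-j}) - m(\theta^{*})
  \;\approx\;
  \frac{1}{N}\,
  \nabla_{\theta} m(\theta^{*})^{\top}
  H^{-1}
  \nabla_{\theta} \ell(z_j, \theta^{*}),
  \qquad
  H = \frac{1}{N}\sum_{i=1}^{N} \nabla_{\theta}^{2}\, \ell(z_i, \theta^{*}).
\]

Sign and scaling conventions vary across the literature; the point of the sketch is only that attribution scores are inner products between a measurement gradient and Hessian-preconditioned per-example loss gradients, which is why the choice of measurement and of Hessian approximation are the key design decisions the abstract refers to.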