Authors: Itamar Pres, Laura Ruis, Ekdeep Singh Lubana, David Krueger
Abstract: Representation engineering methods have recently shown promise for enabling
efficient steering of model behavior. However, evaluation pipelines for these
methods have primarily relied on subjective demonstrations, instead of
quantitative, objective metrics. We aim to take a step towards addressing this
issue by advocating for four properties missing from current evaluations: (i)
contexts sufficiently similar to downstream tasks should be used for assessing
intervention quality; (ii) model likelihoods should be accounted for; (iii)
evaluations should allow for standardized comparisons across different target
behaviors; and (iv) baseline comparisons should be offered. We introduce an
evaluation pipeline grounded in these criteria, offering both a quantitative
and visual analysis of how effectively a given method works. We use this
pipeline to evaluate two representation engineering methods on how effectively
they can steer behaviors such as truthfulness and corrigibility, finding that
some interventions are less effective than previously reported.
Source: http://arxiv.org/abs/2410.17245v1
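
The abstract's criteria (ii)-(iv) can be made concrete with a small illustration. The sketch below is not the authors' pipeline; it is a minimal, hedged example of how one might score a steering intervention by the log-likelihood gap a model assigns to a target-behavior completion versus a contrastive anti-target completion, using the unsteered model as the baseline. The model name (`gpt2`), the chosen layer, the zero placeholder steering vector, and the prompt/completion pairs are all illustrative assumptions, not values from the paper.

```python
# Illustrative sketch only (assumed setup, not the paper's actual evaluation code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def completion_logprob(model, tokenizer, prompt: str, completion: str) -> float:
    """Sum of log-probabilities the model assigns to `completion` given `prompt`.

    Assumes the tokenization of `prompt` is a prefix of the tokenization of
    `prompt + completion` (a common approximation; BPE boundary effects are ignored).
    """
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + completion, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits          # [1, seq_len, vocab]
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = full_ids[:, 1:]                    # next-token targets
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    n_prompt = prompt_ids.shape[1]
    return token_lp[0, n_prompt - 1:].sum().item()  # score completion tokens only


def steering_hook(vector: torch.Tensor):
    """Forward hook that adds a fixed vector to a transformer block's output."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + vector
        return (hidden, *output[1:]) if isinstance(output, tuple) else hidden
    return hook


if __name__ == "__main__":
    name = "gpt2"  # placeholder model; block path below is GPT-2 specific
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name).eval()

    # Contrastive pair for one target behavior (here, truthfulness).
    prompt = "Q: Is the Earth flat?\nA:"
    target = " No, the Earth is roughly spherical."
    anti = " Yes, the Earth is flat."

    # Baseline: unsteered model's log-likelihood gap between target and anti-target.
    base_gap = (completion_logprob(model, tok, prompt, target)
                - completion_logprob(model, tok, prompt, anti))

    # Intervention: add a steering vector at one block (zero vector stands in for
    # whatever vector a representation engineering method would supply).
    layer = 6
    vec = torch.zeros(model.config.hidden_size)
    handle = model.transformer.h[layer].register_forward_hook(steering_hook(vec))
    steered_gap = (completion_logprob(model, tok, prompt, target)
                   - completion_logprob(model, tok, prompt, anti))
    handle.remove()

    print(f"target-vs-anti log-likelihood gap: base={base_gap:.2f}, steered={steered_gap:.2f}")
```

Because the score is a likelihood gap between matched completions rather than a free-form demonstration, it can in principle be computed identically for different target behaviors (truthfulness, corrigibility, etc.) and compared against the unsteered baseline, which is the flavor of standardized, baseline-anchored evaluation the abstract advocates.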