Mission Impossible: A Statistical Perspective on Jailbreaking LLMs

Authors: Jingtong Su, Julia Kempe, Karen Ullrich

Abstract: Large language models (LLMs) are trained on a deluge of text data with
limited quality control. As a result, LLMs can exhibit unintended or even
harmful behaviours, such as leaking information, spreading fake news, or producing hate speech.
Countermeasures, commonly referred to as preference alignment, include
fine-tuning the pretrained LLMs with carefully crafted text examples of desired
behaviour. Even then, empirical evidence shows that preference-aligned LLMs can be
enticed into harmful behaviour. This so-called jailbreaking of LLMs is typically
achieved by adversarially modifying the input prompt to the LLM. Our paper
provides theoretical insights into the phenomenon of preference alignment and
jailbreaking from a statistical perspective. Under our framework, we first show
that pretrained LLMs will mimic harmful behaviour if present in the training
corpus. Under that same framework, we then introduce a statistical notion of
alignment, and lower-bound the jailbreaking probability, showing that it is
unpreventable under reasonable assumptions. Based on our insights, we propose
an alteration to the currently prevalent alignment strategy, RLHF. Specifically,
we introduce a simple modification to the RLHF objective, which we call E-RLHF,
that aims to increase the likelihood of safe responses. E-RLHF incurs no
additional training cost and is compatible with other methods. Empirically, we
demonstrate that E-RLHF outperforms RLHF on all alignment problems put forward
by the AdvBench and HarmBench projects without sacrificing model performance as
measured by the MT-Bench project.
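
The abstract does not give the exact form of the E-RLHF objective, so the sketch below only illustrates the kind of change it describes, written against the standard KL-regularized RLHF objective; the safety-transformed prompt x_safe and its placement inside the KL term are illustrative assumptions, not the authors' published formulation.

\[
\max_{\pi_\theta} \; \mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x)} \big[ r(x, y) \big]
\;-\; \beta \, \mathbb{D}_{\mathrm{KL}}\!\big( \pi_\theta(\cdot \mid x) \,\Vert\, \pi_{\mathrm{ref}}(\cdot \mid x) \big)
\qquad \text{(standard RLHF)}
\]

\[
\max_{\pi_\theta} \; \mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x)} \big[ r(x, y) \big]
\;-\; \beta \, \mathbb{D}_{\mathrm{KL}}\!\big( \pi_\theta(\cdot \mid x) \,\Vert\, \pi_{\mathrm{ref}}(\cdot \mid x_{\mathrm{safe}}) \big)
\qquad \text{(E-RLHF-style sketch)}
\]

Here \pi_\theta is the policy being fine-tuned, \pi_{\mathrm{ref}} the frozen reference model, r the reward model, and \beta the KL coefficient; x_{\mathrm{safe}} denotes a hypothetical safety-rewritten version of a harmful prompt x. Pulling the reference distribution toward responses to a safer prompt would raise the likelihood of safe completions while leaving the reward model and sampling pipeline untouched, which is consistent with the abstract's claim of no additional training cost.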

Source: http://arxiv.org/abs/2408.01420v1
