SEAS: Self-Evolving Adversarial Safety Optimization for Large Language Models

Authors: Muxi Diao, Rumei Li, Shiyang Liu, Guogang Liao, Jingang Wang, Xunliang Cai, Weiran Xu

Abstract: As large language models (LLMs) continue to advance in capability and
influence, ensuring their security and preventing harmful outputs have become
crucial. A promising approach to address these concerns involves training
models to automatically generate adversarial prompts for red teaming. However,
the evolving subtlety of vulnerabilities in LLMs challenges the effectiveness
of current adversarial methods, which struggle to specifically target and
explore the weaknesses of these models. To tackle these challenges, we
introduce the Self-Evolving Adversarial Safety (SEAS)
optimization framework, which enhances security by leveraging data generated by
the model itself. SEAS operates through three iterative stages: Initialization,
Attack, and Adversarial Optimization, refining both the Red Team and Target
models to improve robustness and safety. This framework reduces reliance on
manual testing and significantly enhances the security capabilities of LLMs.
Our contributions include a novel adversarial framework and a comprehensive
safety dataset; after three iterations, the Target model achieves a security
level comparable to GPT-4, while the Red Team model shows a marked increase in
attack success rate (ASR) against advanced models.

Source: http://arxiv.org/abs/2408.02632v1
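
For a concrete picture of the loop the abstract describes (Initialization, Attack, Adversarial Optimization), the sketch below shows how the three stages might interleave over several rounds. It is only an illustration under assumed interfaces: every function name and parameter (generate_attacks, judge_safety, update_red_team, and so on) is a hypothetical placeholder, not the authors' actual implementation or training procedure.

# Hypothetical sketch of the iterative SEAS-style loop described in the
# abstract. All callables passed in are illustrative placeholders.

from typing import Callable, List, Tuple

def seas_loop(
    seed_prompts: List[str],
    generate_attacks: Callable[[List[str]], List[str]],  # Red Team model
    respond: Callable[[List[str]], List[str]],            # Target model
    judge_safety: Callable[[str, str], bool],             # safety judge
    update_red_team: Callable[[List[Tuple[str, str, bool]]], None],
    update_target: Callable[[List[Tuple[str, str, bool]]], None],
    iterations: int = 3,
) -> None:
    """Run the Attack and Adversarial Optimization stages for several rounds."""
    # Initialization: start from seed safety data.
    prompts = list(seed_prompts)
    for _ in range(iterations):
        # Attack: the Red Team model produces adversarial prompts and the
        # Target model answers them.
        attacks = generate_attacks(prompts)
        answers = respond(attacks)

        # Adversarial Optimization: label each interaction as safe or unsafe,
        # then refine both models - the Red Team toward a higher attack
        # success rate (ASR), the Target toward safer responses.
        records = [(a, r, judge_safety(a, r)) for a, r in zip(attacks, answers)]
        update_red_team(records)
        update_target(records)

        # Successful attacks seed the next round, so the process self-evolves.
        prompts = [a for a, _, unsafe in records if unsafe] or prompts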
