SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator

Authors: Guoxuan Chen, Han Shi, Jiawei Li, Yihang Gao, Xiaozhe Ren, Yimeng Chen, Xin Jiang, Zhenguo Li, Weiyang Liu, Chao Huang

Abstract: Large Language Models (LLMs) have exhibited exceptional performance across a
spectrum of natural language processing tasks. However, their substantial sizes
pose considerable challenges, particularly in computational demands and
inference speed, due to the quadratic complexity of self-attention. In this work, we have
identified a key pattern: certain seemingly meaningless special tokens (i.e.,
separators) contribute disproportionately to attention scores compared to
semantically meaningful tokens. This observation suggests that the information in
the segments between these separator tokens can be effectively condensed into
the separator tokens themselves without significant information loss. Guided by
this insight, we introduce SepLLM, a plug-and-play framework that accelerates
inference by compressing these segments and eliminating redundant tokens.
Additionally, we implement efficient kernels for training acceleration.
Experimental results across training-free, training-from-scratch, and
post-training settings demonstrate SepLLM’s effectiveness. Notably, using the
Llama-3-8B backbone, SepLLM achieves over 50% reduction in KV cache on the
GSM8K-CoT benchmark while maintaining comparable performance. Furthermore, in
streaming settings, SepLLM effectively processes sequences of over 4 million
tokens while maintaining consistent language modeling capabilities.

Source: http://arxiv.org/abs/2412.12094v1
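To make the compression idea concrete, here is a minimal sketch (not the authors' released implementation) of the keep-or-evict rule that separator-based KV-cache compression implies: retain a few initial "sink" tokens, retain every separator token, and retain a sliding window of the most recent tokens, evicting everything else. The separator set, function name, and window sizes below are illustrative assumptions.

```python
# Hypothetical sketch of a SepLLM-style KV-cache retention rule.
# Separator set and window sizes are assumptions, not the paper's exact values.

SEPARATOR_TOKENS = {".", ",", ";", ":", "?", "!", "\n"}


def tokens_to_keep(tokens, num_initial=4, local_window=256):
    """Return sorted indices of tokens whose KV entries are retained."""
    n = len(tokens)
    keep = set(range(min(num_initial, n)))  # initial / attention-sink tokens
    keep.update(i for i, t in enumerate(tokens) if t in SEPARATOR_TOKENS)  # separators
    keep.update(range(max(0, n - local_window), n))  # recent local context
    return sorted(keep)


# Example: tokens older than the local window survive only if they are
# separators, so the cache grows roughly with the number of segments
# rather than the number of tokens.
print(tokens_to_keep(list("The cat sat . The dog ran ."), num_initial=2, local_window=4))
```

Under this rule, each past segment is represented in the cache only by its closing separator, which is what allows the reported KV-cache reduction while keeping recent context intact.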
