ThinK: Thinner Key Cache by Query-Driven Pruning

Authors: Yuhui Xu, Zhanming Jie, Hanze Dong, Lei Wang, Xudong Lu, Aojun Zhou, Amrita Saha, Caiming Xiong, Doyen Sahoo

Abstract: Large Language Models (LLMs) have revolutionized the field of natural
language processing, achieving unprecedented performance across a variety of
applications by leveraging increased model sizes and sequence lengths. However,
the associated rise in computational and memory costs poses significant
challenges, particularly in managing long sequences due to the quadratic
complexity of the transformer attention mechanism. This paper focuses on the
long-context scenario, addressing the inefficiencies in KV cache memory
consumption during inference. Unlike existing approaches that optimize memory along the
sequence-length dimension, we find that the channel dimension of the KV cache exhibits
significant redundancy, characterized by an unbalanced magnitude distribution and a
low-rank structure in the attention weights. Based on
these observations, we propose ThinK, a novel query-dependent KV cache pruning
method designed to minimize attention weight loss while selectively pruning the
least significant channels. Our approach maintains or improves model accuracy while
reducing memory costs by more than 20% relative to vanilla KV cache eviction methods.
Extensive evaluations on the LLaMA3 and
Mistral models across various long-sequence datasets confirm the efficacy of
ThinK, setting a new precedent for efficient LLM deployment without
compromising performance. We also outline the potential of extending our method
to value cache pruning, demonstrating ThinK’s versatility and broad
applicability in reducing both memory and computational overheads.

Source: http://arxiv.org/abs/2407.21018v1
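
The abstract describes scoring and discarding the least useful channels of the key cache using recent queries. Below is a minimal PyTorch sketch of that idea, assuming a standard (batch, heads, seq_len, head_dim) cache layout; the function name, the keep_ratio and window parameters, and the exact scoring rule (the Frobenius norm of each channel's rank-1 contribution to the attention logits) are illustrative assumptions, not the paper's released implementation.

import torch

def prune_key_channels(keys, queries, keep_ratio=0.6, window=32):
    # Illustrative sketch of query-driven key-cache channel pruning.
    # keys:    (batch, heads, seq_len, head_dim) cached keys
    # queries: (batch, heads, q_len, head_dim)   recent queries
    b, h, s, d = keys.shape
    q = queries[:, :, -window:, :]  # observation window of recent queries

    # Score channel j by ||q_j k_j^T||_F = ||q_j|| * ||k_j||, the Frobenius
    # norm of its rank-1 contribution to the attention logits Q K^T.
    scores = q.norm(dim=2) * keys.norm(dim=2)  # (batch, heads, head_dim)

    # Keep the top-scoring channels per head, sorted for stable indexing.
    kept = max(1, int(d * keep_ratio))
    idx = scores.topk(kept, dim=-1).indices.sort(dim=-1).values  # (b, h, kept)

    # Gather the surviving channels from the key cache; the indices are
    # returned so that future queries can be sliced to the same channels.
    pruned = keys.gather(3, idx.unsqueeze(2).expand(b, h, s, kept))
    return pruned, idx

At decode time, the live query would be sliced to the same surviving channels before the dot product, so the pruned attention logits use exactly the channels kept here.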
