BLAST: Block-Level Adaptive Structured Matrices for Efficient Deep Neural Network Inference

Authors: Changwoo Lee, Soo Min Kwon, Qing Qu, Hun-Seok Kim

Abstract: Large-scale foundation models have demonstrated exceptional performance in
language and vision tasks. However, the numerous dense matrix-vector operations
involved in these large networks pose significant computational challenges
during inference. To address these challenges, we introduce the Block-Level
Adaptive STructured (BLAST) matrix, designed to learn and leverage efficient
structures prevalent in the weight matrices of linear layers within deep
learning models. Compared to existing structured matrices, the BLAST matrix
offers substantial flexibility, as it can represent various types of structures
that are either learned from data or computed from pre-existing weight
matrices. We demonstrate the efficiency of using the BLAST matrix for
compressing models for both language and vision tasks, showing that (i) for
medium-sized models such as ViT and GPT-2, training with BLAST weights boosts
performance while reducing complexity by 70% and 40%, respectively; and (ii) for large
foundation models such as Llama-7B and DiT-XL, the BLAST matrix achieves a 2x
compression while exhibiting the lowest performance degradation among all
tested structured matrices. Our code is available at
https://github.com/changwoolee/BLAST.

Source: http://arxiv.org/abs/2410.21262v1
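To make the idea of a block-level structured matrix concrete, here is a minimal NumPy sketch of the kind of shared-factor block structure the abstract alludes to. This is an illustrative assumption, not the authors' exact parameterization (see the paper and repository for that): a weight matrix is split into a grid of blocks, each block is a low-rank product whose left factor is shared along its block-row and whose right factor is shared along its block-column, and a matrix-vector product only ever touches the small factors.

```python
import numpy as np

# Hypothetical block-structured parameterization (names are illustrative):
# W's (i, j) block is U[i] @ diag(S[i, j]) @ V[j].T, with U[i] shared across
# block-row i and V[j] shared across block-column j.

rng = np.random.default_rng(0)
b, m, n, r = 4, 32, 32, 8   # block grid size, block height/width, rank

U = rng.standard_normal((b, m, r))   # one left factor per block-row
V = rng.standard_normal((b, n, r))   # one right factor per block-column
S = rng.standard_normal((b, b, r))   # per-block diagonal coefficients

def blast_matvec(U, V, S, x):
    """Compute y = W @ x without materializing the dense W."""
    b, n, r = V.shape
    xs = x.reshape(b, n)                       # split x by block-column
    z = np.einsum('jnr,jn->jr', V, xs)         # z_j = V_j^T x_j
    y = np.einsum('imr,ijr,jr->im', U, S, z)   # y_i = sum_j U_i diag(s_ij) z_j
    return y.reshape(-1)

# Sanity check against the explicitly assembled dense matrix.
x = rng.standard_normal(b * n)
W = np.block([[U[i] @ np.diag(S[i, j]) @ V[j].T for j in range(b)]
              for i in range(b)])
assert np.allclose(blast_matvec(U, V, S, x), W @ x)
```

The efficiency claim in the abstract follows from the parameter count: the factors store O(b·(m + n)·r + b²·r) numbers instead of O(b²·m·n) for the dense matrix, and the matvec cost shrinks accordingly.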
