Mixture of Experts with Mixture of Precisions for Tuning Quality of Service

Authors: HamidReza Imani, Abdolah Amirany, Tarek El-Ghazawi

Abstract: The increasing demand for deploying large Mixture-of-Experts (MoE) models in
resource-constrained environments necessitates efficient approaches to address
their high memory and computational requirements. Moreover, since tasks arrive
with different user-defined constraints and the available resources change over
time in multi-tenant environments, an approach that provides a flexible
configuration space is needed. This paper
presents an adaptive serving approach for the efficient deployment of MoE
models, capitalizing on partial quantization of the experts. By dynamically
determining the number of quantized experts and their distribution across CPU
and GPU, our approach explores the Pareto frontier and offers a fine-grained
range of configurations for tuning throughput and model quality. Our evaluation
on an NVIDIA A100 GPU using a Mixtral 8x7B MoE model for three language
modelling benchmarks demonstrates that the throughput of token generation can
be adjusted from 0.63 to 13.00 tokens per second. This enhancement comes with a
marginal perplexity increase of 2.62 to 2.80, 6.48 to 7.24, and 3.24 to 3.53
for the WikiText2, PTB, and C4 datasets, respectively, under maximum quantization.
These results highlight the practical applicability of our approach in dynamic
and accuracy-sensitive applications where both memory usage and output quality
are important.

Source: http://arxiv.org/abs/2407.14417v1
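
To illustrate the configuration space the abstract describes, the minimal Python sketch below enumerates a per-expert precision and placement plan for a Mixtral-8x7B-style model (32 MoE layers with 8 experts each). The names ExpertPlacement and plan_configuration, the fill-in-order policy, and the choice of quantization bit width are assumptions made for illustration only, not the paper's actual implementation.

    # Hypothetical sketch: choose how many of the 256 experts to quantize and
    # whether each expert lives on the GPU or is offloaded to the CPU.
    # More quantized experts -> higher throughput, slightly higher perplexity.
    from dataclasses import dataclass
    from typing import List

    NUM_LAYERS = 32        # Mixtral 8x7B has 32 MoE layers
    EXPERTS_PER_LAYER = 8  # each layer holds 8 experts (top-2 active per token)

    @dataclass
    class ExpertPlacement:
        layer: int
        expert: int
        quantized: bool   # True -> low-precision copy (bit width assumed, e.g. 4-bit)
        device: str       # "cuda" or "cpu"

    def plan_configuration(num_quantized: int, gpu_budget: int) -> List[ExpertPlacement]:
        """Assign a precision and a device to every expert.

        num_quantized: how many experts (out of 256) to keep in low precision.
        gpu_budget:    how many experts fit in GPU memory; the rest go to CPU.
        """
        placements = []
        idx = 0
        for layer in range(NUM_LAYERS):
            for expert in range(EXPERTS_PER_LAYER):
                quantized = idx < num_quantized      # quantize the first N experts
                device = "cuda" if idx < gpu_budget else "cpu"
                placements.append(ExpertPlacement(layer, expert, quantized, device))
                idx += 1
        return placements

    # Example: quantize half of the experts and keep 192 of them on the GPU.
    config = plan_configuration(num_quantized=128, gpu_budget=192)

Sweeping num_quantized and gpu_budget over their ranges yields the fine-grained throughput/quality trade-off curve (the Pareto frontier) referred to in the abstract.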
