Authors: Jan Ludziejewski, Maciej Pióro, Jakub Krajewski, Maciej Stefaniak, Michał Krutul, Jan Małaśnicki, Marek Cygan, Piotr Sankowski, Kamil Adamczewski, Piotr Miłoś, Sebastian Jaszczur
Abstract: Mixture of Experts (MoE) architectures have significantly increased
computational efficiency in both research and real-world applications of
large-scale machine learning models. However, their scalability and efficiency
under memory constraints remain relatively underexplored. In this work, we
present joint scaling laws for dense and MoE models, incorporating key factors
such as the number of active parameters, dataset size, and the number of
experts. Our findings provide a principled framework for selecting the optimal
MoE configuration under fixed memory and compute budgets. Surprisingly, we show
that MoE models can be more memory-efficient than dense models, contradicting
conventional wisdom. To derive and validate the theoretical predictions of our
scaling laws, we conduct over 280 experiments with up to 2.7B active parameters
and up to 5B total parameters. These results offer actionable insights for
designing and deploying MoE models in practical large-scale training scenarios.
Source: http://arxiv.org/abs/2502.05172v1
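
Since the abstract frames the contribution as a principled framework for choosing an MoE configuration under joint memory and compute budgets, the sketch below illustrates what such a selection loop might look like in code. It is a minimal sketch only: the loss surrogate is a generic Chinchilla-style power law with a hypothetical expert-count term, and the coefficients, the total-parameter estimate, and the candidate grids are all made-up placeholders, not the fitted scaling law or the experimental setup from the paper.

```python
# Illustrative sketch only: a toy grid search over MoE configurations under
# fixed memory and compute budgets. The loss surrogate below is a generic
# Chinchilla-style power law with a hypothetical expert-count term; every
# constant is a placeholder, NOT a fitted coefficient from the paper.

from itertools import product


def toy_loss(n_active: float, d_tokens: float, n_experts: int) -> float:
    """Hypothetical loss surrogate L(N_active, D, E) with placeholder constants."""
    a, alpha = 400.0, 0.34      # active-parameter term (made up)
    b, beta = 1800.0, 0.28      # data term (made up)
    gamma = 0.02                # assumed mild, saturating benefit from more experts (made up)
    irreducible = 1.7
    expert_gain = 1.0 - gamma * min(n_experts - 1, 31) / 31
    return irreducible + expert_gain * (a / n_active**alpha) + b / d_tokens**beta


def total_params(n_active: float, n_experts: int, ffn_fraction: float = 0.66) -> float:
    """Rough total-parameter estimate: FFN weights replicated once per expert."""
    return n_active * (1 - ffn_fraction) + n_active * ffn_fraction * n_experts


def best_config(flops_budget: float, mem_budget_params: float):
    """Lowest predicted loss among configs that fit both memory and compute budgets."""
    best = None
    for n_active, n_experts in product(
        [3e8, 7e8, 1.3e9, 2.7e9],   # candidate active-parameter counts
        [1, 4, 8, 16, 32],          # candidate expert counts (1 = dense)
    ):
        # ~6*N*D training-FLOPs rule of thumb to convert the compute budget to tokens
        d_tokens = flops_budget / (6 * n_active)
        if total_params(n_active, n_experts) > mem_budget_params:
            continue  # violates the memory (total-parameter) budget
        loss = toy_loss(n_active, d_tokens, n_experts)
        if best is None or loss < best[0]:
            best = (loss, n_active, n_experts, d_tokens)
    return best


if __name__ == "__main__":
    loss, n_active, n_experts, d_tokens = best_config(
        flops_budget=1e21, mem_budget_params=5e9
    )
    print(f"predicted loss {loss:.3f}: {n_active:.1e} active params, "
          f"{n_experts} experts, {d_tokens:.2e} training tokens")
```

Under this toy surrogate, tightening the memory budget pushes the search toward fewer experts or fewer active parameters, while the abstract's result is that, with the right joint scaling law, MoE configurations can still win under such constraints; the actual trade-off curves come from the paper's fitted laws, not from this sketch.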