Authors: William Chen, Jinchuan Tian, Yifan Peng, Brian Yan, Chao-Han Huck Yang, Shinji Watanabe
Abstract: Neural scaling laws offer valuable insights for designing robust sequence
processing architectures. While these laws have been extensively characterized
in other modalities, their behavior in speech remains comparatively
underexplored. In this work, we introduce OWLS, an open-access, reproducible
suite of multilingual speech recognition and translation models spanning 0.25B
to 18B parameters; to the best of our knowledge, the 18B model is the largest
speech model to date. OWLS leverages up to 360K hours of public speech data
across 150 languages, enabling a systematic investigation into how data, model,
and compute scaling each influence performance in multilingual speech tasks. We
use OWLS to derive neural scaling laws, showing how final performance can be
reliably predicted when scaling. One of our key findings is that scaling
enhances performance on low-resource languages/dialects, helping to mitigate
bias and improve the accessibility of speech technologies. Finally, we show how
OWLS can be used to power new research directions by discovering emergent
abilities in large-scale speech models. Model checkpoints will be released at
https://huggingface.co/collections/espnet/owls-scaling-laws-for-speech-recognition-and-translation-67ab7f991c194065f057ce8d
to support future studies.
Source: http://arxiv.org/abs/2502.10373v1
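The abstract's claim that final performance can be "reliably predicted when scaling" typically rests on fitting a saturating power law to (scale, error) observations from smaller runs and extrapolating to larger budgets. The sketch below illustrates that generic procedure; it is not the paper's code, and the functional form, the reference scale `C0`, and all numbers are assumptions chosen for illustration only.

```python
# Minimal sketch of a scaling-law fit: error(C) = a * (C / C0)^(-b) + e_irr,
# a saturating power law of the kind commonly used in scaling-law studies.
# All data points and constants below are illustrative placeholders,
# NOT results from the OWLS paper.
import numpy as np
from scipy.optimize import curve_fit

C0 = 1e19  # reference compute scale, keeps the fit numerically well-conditioned

def scaling_law(c, a, b, e_irr):
    """Saturating power law: predicted error at compute c (in FLOPs)."""
    return a * np.power(c / C0, -b) + e_irr

# Hypothetical (training compute, word error rate) observations from small runs.
compute = np.array([1e19, 4e19, 1.6e20, 6.4e20, 2.6e21])
wer = np.array([18.2, 16.1, 14.4, 13.0, 11.9])

# Fit the three constants; p0 is a rough initial guess for the optimizer.
(a, b, e_irr), _ = curve_fit(scaling_law, compute, wer, p0=[10.0, 0.2, 5.0])

# Extrapolate to a larger compute budget to predict final performance.
c_big = 1e22
print(f"fit: a={a:.3g}, b={b:.3g}, irreducible error={e_irr:.3g}")
print(f"predicted WER at {c_big:.1e} FLOPs: {scaling_law(c_big, a, b, e_irr):.2f}")
```

The same recipe applies with model parameters or training hours on the x-axis instead of compute; the fitted exponent `b` then characterizes how quickly error decays along that axis, and `e_irr` estimates the floor that no amount of scaling removes.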