Authors: Pit Neitemeier, Björn Deiseroth, Constantin Eichenberg, Lukas Balles
Abstract: Tokenization is a fundamental step in natural language processing, breaking
text into units that computational models can process. While learned subword
tokenizers have become the de facto standard, they present challenges such as
large vocabularies, limited adaptability to new domains or languages, and
sensitivity to spelling errors and variations. To overcome these limitations,
we investigate a hierarchical architecture for autoregressive language
modelling that combines character-level and word-level processing. It employs a
lightweight character-level encoder to convert character sequences into word
embeddings, which are then processed by a word-level backbone model and decoded
back into characters via a compact character-level decoder. This method retains
the sequence compression benefits of word-level tokenization without relying on
a rigid, predefined vocabulary. We demonstrate, at scales up to 7 billion
parameters, that hierarchical transformers match the downstream task
performance of subword-tokenizer-based models while exhibiting significantly
greater robustness to input perturbations. Additionally, during continued
pretraining on an out-of-domain language, our model trains almost twice as
fast, achieves superior performance on the target language, and retains more of
its previously learned knowledge. Hierarchical transformers pave the way for
NLP systems that are more robust, flexible, and generalizable across languages
and domains.
Source: http://arxiv.org/abs/2501.10322v1
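Below is a minimal sketch, not the authors' implementation, of the three-stage design the abstract describes: a lightweight character-level encoder that pools each word's characters into a word embedding, a word-level autoregressive backbone, and a compact character-level decoder that emits characters conditioned on the backbone's hidden states. It uses standard PyTorch modules; the module names (CharEncoder, WordBackbone, CharDecoder), layer counts, dimensions, mean pooling, and the way the decoder is conditioned are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class CharEncoder(nn.Module):
    """Lightweight character-level encoder: maps each word's character
    sequence to a single word embedding (mean pooling is an assumption)."""
    def __init__(self, n_chars, d_char, d_word, n_layers=2, n_heads=4):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, d_char)
        layer = nn.TransformerEncoderLayer(d_char, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.proj = nn.Linear(d_char, d_word)

    def forward(self, char_ids):               # (batch, words, chars_per_word)
        b, w, c = char_ids.shape
        x = self.char_emb(char_ids.reshape(b * w, c))
        x = self.encoder(x).mean(dim=1)         # pool characters -> one vector per word
        return self.proj(x).reshape(b, w, -1)   # (batch, words, d_word)


class WordBackbone(nn.Module):
    """Word-level autoregressive backbone operating on word embeddings."""
    def __init__(self, d_word, n_layers=4, n_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_word, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)

    def forward(self, word_emb):                # (batch, words, d_word)
        mask = nn.Transformer.generate_square_subsequent_mask(
            word_emb.size(1)).to(word_emb.device)
        return self.backbone(word_emb, mask=mask)


class CharDecoder(nn.Module):
    """Compact character-level decoder: predicts a word's characters
    autoregressively, conditioned on one backbone hidden state."""
    def __init__(self, n_chars, d_word, d_char, n_layers=2, n_heads=4):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, d_char)
        self.cond = nn.Linear(d_word, d_char)
        layer = nn.TransformerDecoderLayer(d_char, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.head = nn.Linear(d_char, n_chars)

    def forward(self, word_state, tgt_char_ids):   # (b, w, d_word), (b, w, c)
        b, w, c = tgt_char_ids.shape
        memory = self.cond(word_state).reshape(b * w, 1, -1)
        tgt = self.char_emb(tgt_char_ids.reshape(b * w, c))
        mask = nn.Transformer.generate_square_subsequent_mask(c).to(tgt.device)
        out = self.decoder(tgt, memory, tgt_mask=mask)
        return self.head(out).reshape(b, w, c, -1)  # per-character logits


# Toy forward pass: 2 sequences, 5 words each, up to 8 characters per word.
# In training, the decoder targets would be the characters of the *next* word
# (shifted), so the model remains autoregressive end to end.
enc, bb, dec = CharEncoder(128, 64, 256), WordBackbone(256), CharDecoder(128, 256, 64)
chars = torch.randint(0, 128, (2, 5, 8))
logits = dec(bb(enc(chars)), chars)             # (2, 5, 8, 128)
```

Because the backbone only ever sees one embedding per word, the sketch keeps the sequence-compression benefit of word-level tokenization, while the character-level encoder and decoder remove the need for a fixed subword vocabulary, which is the property the abstract credits for the model's robustness and cross-lingual adaptability.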