A Spitting Image: Modular Superpixel Tokenization in Vision Transformers

Authors: Marius Aasan, Odd Kolbjørnsen, Anne Schistad Solberg, Adín Ramirez Rivera

Abstract: Vision Transformer (ViT) architectures traditionally employ a grid-based
approach to tokenization independent of the semantic content of an image. We
propose a modular superpixel tokenization strategy which decouples tokenization
and feature extraction; a shift from contemporary approaches where these are
treated as an undifferentiated whole. Using on-line content-aware tokenization
and scale- and shape-invariant positional embeddings, we perform experiments
and ablations that contrast our approach with patch-based tokenization and
randomized partitions as baselines. We show that our method significantly
improves the faithfulness of attributions and gives pixel-level granularity on
zero-shot unsupervised dense prediction tasks, while maintaining predictive
performance in classification tasks. Our approach provides a modular
tokenization framework commensurable with standard architectures, extending the
space of ViTs to a larger class of semantically-rich models.

Source: http://arxiv.org/abs/2408.07680v1
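
To make the idea of content-aware tokenization concrete, below is a minimal, illustrative sketch of how superpixel tokens might be built for a ViT-style model. This is not the authors' implementation: the use of SLIC superpixels, mean-pooled colour features, and the centroid-plus-log-area positional descriptor are assumptions chosen for illustration of the general approach (a content-aware partition whose tokens vary in scale and shape, decoupled from the downstream feature extractor).

```python
# Illustrative sketch only; `superpixel_tokens`, the SLIC segmentation, and the
# mean-colour features are assumptions, not the paper's actual pipeline.
import numpy as np
from skimage.segmentation import slic


def superpixel_tokens(image: np.ndarray, n_segments: int = 196):
    """Partition an image into superpixels and build one token per superpixel.

    image: H x W x 3 float array with values in [0, 1].
    Returns (tokens, positions): a mean-colour feature per superpixel, and a
    simple scale- and shape-aware positional descriptor
    (normalized centroid + log-area).
    """
    labels = slic(image, n_segments=n_segments, compactness=10.0, start_label=0)
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]

    tokens, positions = [], []
    for s in range(labels.max() + 1):
        mask = labels == s
        if not mask.any():
            continue
        # Content feature: mean colour over the superpixel (a stand-in for the
        # learned feature extraction that the paper decouples from tokenization).
        tokens.append(image[mask].mean(axis=0))
        # Positional descriptor: normalized centroid plus log-area, so tokens of
        # different sizes and shapes receive comparable positional information.
        cy, cx = ys[mask].mean() / h, xs[mask].mean() / w
        area = mask.sum() / (h * w)
        positions.append([cy, cx, np.log(area)])
    return np.stack(tokens), np.stack(positions)
```

In this sketch, the resulting token and position arrays would be projected into the transformer's embedding dimension before entering standard ViT blocks; the key contrast with grid patching is that token boundaries follow image content rather than a fixed lattice.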
