Authors: David M. Chan, Rodolfo Corona, Joonyong Park, Cheol Jun Cho, Yutong Bai, Trevor Darrell
Abstract: With the introduction of transformer-based models for vision and language
tasks, such as LLaVA and Chameleon, there has been renewed interest in the
discrete tokenized representation of images. These models often treat image
patches as discrete tokens, analogous to words in natural language, learning
joint alignments between visual and human languages. However, little is known
about the statistical behavior of these visual languages: whether they follow
similar frequency distributions, grammatical structures, or topologies as
natural languages. In this paper, we take a natural-language-centric approach
to analyzing discrete visual languages and uncover striking similarities and
fundamental differences. We demonstrate that, although visual languages adhere
to Zipfian distributions, higher token innovation drives greater entropy and
lower compression, with tokens predominantly representing object parts,
indicating intermediate granularity. We also show that visual languages lack
cohesive grammatical structures, leading to higher perplexity and weaker
hierarchical organization compared to natural languages. Finally, we
demonstrate that, while vision models align more closely with natural languages
than other models, this alignment remains significantly weaker than the
cohesion found within natural languages. Through these experiments, we
demonstrate how understanding the statistical properties of discrete visual
languages can inform the design of more effective computer vision models.
Source: http://arxiv.org/abs/2411.05001v1
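The abstract's first finding, that visual token streams follow Zipfian rank-frequency distributions, can be sanity-checked on any stream of discrete tokens. The sketch below is illustrative only: it uses synthetic token IDs rather than the paper's actual image tokenizers, and fits the Zipf exponent with a simple log-log least-squares regression. Zipf's law predicts a slope near -1.

```python
# Hedged sketch: test whether a token stream is roughly Zipfian by
# fitting log(frequency) vs. log(rank). The tokens here are synthetic
# stand-ins, not tokens from LLaVA or Chameleon.
import collections
import math
import random

random.seed(0)

# Draw 50,000 tokens from a Zipf-like prior over a 1,000-symbol vocabulary.
vocab = 1000
weights = [1.0 / (r + 1) for r in range(vocab)]
tokens = random.choices(range(vocab), weights=weights, k=50_000)

# Rank-frequency table: the most common token gets rank 1.
counts = sorted(collections.Counter(tokens).values(), reverse=True)

# Least-squares fit of log(freq) = a + s * log(rank); Zipf's law
# corresponds to a slope s close to -1.
xs = [math.log(r + 1) for r in range(len(counts))]
ys = [math.log(c) for c in counts]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
    (x - mx) ** 2 for x in xs
)
print(f"fitted Zipf exponent: {slope:.2f}")
```

Replacing the synthetic stream with token IDs emitted by a visual tokenizer would reproduce the kind of rank-frequency analysis the paper describes.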