Diffusion Autoencoders are Scalable Image Tokenizers

Authors: Yinbo Chen, Rohit Girdhar, Xiaolong Wang, Sai Saketh Rambhatla, Ishan Misra

Abstract: Tokenizing images into compact visual representations is a key step in learning efficient and high-quality image generative models. We present a simple diffusion tokenizer (DiTo) that learns compact visual representations for image generation models. Our key insight is that a single learning objective, the diffusion L2 loss, can be used to train scalable image tokenizers. Since diffusion is already widely used for image generation, this insight greatly simplifies the training of such tokenizers. In contrast, current state-of-the-art tokenizers rely on an empirically found combination of heuristics and losses, and thus require a complex training recipe that hinges on non-trivially balancing different losses and on pretrained supervised models. We present design decisions, along with theoretical grounding, that enable us to scale DiTo to learn competitive image representations. Our results show that DiTo is a simpler, scalable, and self-supervised alternative to the current state-of-the-art image tokenizer, which is supervised. DiTo achieves quality competitive with or better than the state of the art in image reconstruction and downstream image generation tasks.

Source: http://arxiv.org/abs/2501.18593v1
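
To make the abstract's central claim concrete (a single diffusion L2 loss trains both the encoder and the diffusion decoder end to end), the following is a minimal PyTorch sketch. It is an illustration rather than the paper's implementation: the encoder and denoiser interfaces, the cosine noise schedule, and the noise-prediction parameterization are assumptions made for the sake of a runnable example.

    import torch
    import torch.nn as nn

    class DiffusionTokenizer(nn.Module):
        """Sketch of a diffusion-autoencoder tokenizer trained with one L2 loss.

        `encoder` and `denoiser` are placeholder modules, not the paper's
        architecture: the encoder maps an image to a compact latent z, and
        the denoiser predicts the noise in a corrupted image given (x_t, t, z).
        """

        def __init__(self, encoder: nn.Module, denoiser: nn.Module):
            super().__init__()
            self.encoder = encoder    # image -> compact latent tokens z
            self.denoiser = denoiser  # (noisy image, time, z) -> noise estimate

        def loss(self, x: torch.Tensor) -> torch.Tensor:
            """Single training objective: the diffusion L2 (noise-prediction) loss."""
            z = self.encoder(x)                         # compact representation
            t = torch.rand(x.size(0), device=x.device)  # random diffusion time in [0, 1)
            eps = torch.randn_like(x)                   # Gaussian noise
            # Variance-preserving forward process with a cosine schedule
            # (one common choice; the paper's exact formulation may differ).
            a = torch.cos(t * torch.pi / 2).view(-1, 1, 1, 1)
            s = torch.sin(t * torch.pi / 2).view(-1, 1, 1, 1)
            x_t = a * x + s * eps                       # corrupted image
            # L2 between the true noise and the denoiser's prediction;
            # encoder and decoder are trained end to end by this one loss.
            return ((self.denoiser(x_t, t, z) - eps) ** 2).mean()

At decode time, a diffusion autoencoder of this kind would reconstruct the image by running a standard diffusion sampler conditioned on z, so reconstruction quality comes from the same single objective that shaped the latent space, with no perceptual, adversarial, or other auxiliary losses to balance.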
