Towards Latent Masked Image Modeling for Self-Supervised Visual Representation Learning

Authors: Yibing Wei, Abhinav Gupta, Pedro Morgado

Abstract: Masked Image Modeling (MIM) has emerged as a promising method for deriving
visual representations from unlabeled image data by predicting missing pixels
from masked portions of images. It excels in region-aware learning and provides
strong initializations for various tasks, but struggles to capture high-level
semantics without further supervised fine-tuning, likely due to the low-level
nature of its pixel reconstruction objective. A promising yet unrealized
framework is learning representations through masked reconstruction in latent
space, combining the locality of MIM with high-level learning targets. However, this
approach poses significant training challenges, as the reconstruction targets
are learned jointly with the model, potentially leading to trivial or
suboptimal solutions. Our study is among the first to thoroughly analyze and
address the challenges of such a framework, which we refer to as Latent MIM.
Through a series of carefully designed experiments and extensive analysis, we
identify the sources of these challenges, including representation collapse
under joint online/target optimization, the choice of learning objective, the
high correlation between regions in latent space, and decoder conditioning. By
sequentially addressing these issues, we demonstrate that Latent MIM can indeed
learn high-level representations while retaining the benefits of MIM models.

Source: http://arxiv.org/abs/2407.15837v1
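The abstract only sketches the general setup, so the following PyTorch snippet illustrates one plausible instantiation of latent masked image modeling, not the authors' implementation: an online encoder processes visible patches, a small decoder predicts the latent representations of masked patches, and targets come from a slowly updated (EMA) target encoder. The module names, the EMA target, the cosine-similarity objective, and all hyperparameters are assumptions made for illustration.

```python
# Minimal sketch of a latent masked-image-modeling step (illustrative, not the paper's code).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyPatchEncoder(nn.Module):
    """Toy stand-in for a ViT encoder: patchify -> linear embed -> transformer blocks."""
    def __init__(self, patch=4, dim=128, depth=2, heads=4):
        super().__init__()
        self.patch = patch
        self.embed = nn.Linear(3 * patch * patch, dim)
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)

    def patchify(self, x):
        B, C, H, W = x.shape
        p = self.patch
        x = x.unfold(2, p, p).unfold(3, p, p)              # B, C, H/p, W/p, p, p
        return x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)

    def forward(self, patches):                            # patches: B, N, C*p*p
        return self.blocks(self.embed(patches))            # B, N, dim


def latent_mim_step(online, target, decoder, mask_token, images, mask_ratio=0.75):
    """Predict the target encoder's latents of masked patches from visible patches only.
    Positional embeddings are omitted for brevity; a real model needs them."""
    patches = online.patchify(images)                      # B, N, D_in
    B, N, _ = patches.shape
    n_keep = int(N * (1 - mask_ratio))

    # Random per-sample mask: keep `n_keep` patches visible, mask the rest.
    ids = torch.rand(B, N, device=images.device).argsort(dim=1)
    keep, masked = ids[:, :n_keep], ids[:, n_keep:]

    visible = torch.gather(patches, 1, keep.unsqueeze(-1).expand(-1, -1, patches.size(-1)))
    z_vis = online(visible)                                # encode visible patches only

    # Decoder is conditioned on visible latents plus learnable mask-token queries.
    mask_q = mask_token.expand(B, masked.size(1), -1)
    pred = decoder(torch.cat([z_vis, mask_q], dim=1))[:, n_keep:]

    # Targets: masked-patch latents from the EMA target encoder, no gradients.
    with torch.no_grad():
        z_tgt = target(patches)
        z_tgt = torch.gather(z_tgt, 1, masked.unsqueeze(-1).expand(-1, -1, z_tgt.size(-1)))

    # Normalized regression on masked-patch latents (one possible objective).
    return 1 - F.cosine_similarity(pred, z_tgt, dim=-1).mean()


@torch.no_grad()
def ema_update(target, online, momentum=0.996):
    """Slowly move the target encoder toward the online encoder (helps avoid collapse)."""
    for p_t, p_o in zip(target.parameters(), online.parameters()):
        p_t.mul_(momentum).add_(p_o, alpha=1 - momentum)


if __name__ == "__main__":
    dim = 128
    online = TinyPatchEncoder(dim=dim)
    target = copy.deepcopy(online).requires_grad_(False)
    decoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(dim, 4, dim * 4, batch_first=True), num_layers=1)
    mask_token = nn.Parameter(torch.zeros(1, 1, dim))
    opt = torch.optim.AdamW(
        list(online.parameters()) + list(decoder.parameters()) + [mask_token], lr=1e-4)

    images = torch.randn(8, 3, 32, 32)                     # dummy batch
    loss = latent_mim_step(online, target, decoder, mask_token, images)
    opt.zero_grad(); loss.backward(); opt.step()
    ema_update(target, online)
    print(f"latent-MIM loss: {loss.item():.4f}")
```

The EMA target encoder and the normalized (cosine) regression loss are only two of the design choices the paper analyzes for avoiding collapse when targets are learned jointly with the model; the sketch is meant to make the overall data flow concrete, not to reproduce the authors' recipe.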
