Scaling 4D Representations

Authors: João Carreira, Dilara Gokay, Michael King, Chuhan Zhang, Ignacio Rocco, Aravindh Mahendran, Thomas Albert Keck, Joseph Heyward, Skanda Koppula, Etienne Pot, Goker Erdogan, Yana Hasson, Yi Yang, Klaus Greff, Guillaume Le Moing, Sjoerd van Steenkiste, Daniel Zoran, Drew A. Hudson, Pedro Vélez, Luisa Polanía, Luke Friedman, Chris Duvarney, Ross Goroshin, Kelsey Allen, Jacob Walker, Rishabh Kabra, Eric Aboussouan, Jennifer Sun, Thomas Kipf, Carl Doersch, Viorica Pătrăucean, Dima Damen, Pauline Luc, Mehdi S. M. Sajjadi, Andrew Zisserman

Abstract: Scaling has not yet been convincingly demonstrated for pure self-supervised
learning from video. However, prior work has focused its evaluations on
semantics-related tasks (action classification, ImageNet classification, etc.).
In this paper we focus on evaluating self-supervised learning on non-semantic
vision tasks that are more spatial (3D) and temporal (+1D = 4D), such as camera
pose estimation, point and object tracking, and depth estimation. We show that
by learning from very large video datasets, masked auto-encoding (MAE) with
transformer video models actually scales, consistently improving performance on
these 4D tasks as model size increases from 20M parameters all the way to 22B,
by far the largest self-supervised video model reported to date. Rigorous
apples-to-apples comparisons with many recent image and video models
demonstrate the benefits of scaling 4D representations.

Source: http://arxiv.org/abs/2412.15212v1
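For readers unfamiliar with the masked auto-encoding recipe the abstract refers to, the sketch below illustrates the general video-MAE idea: split a clip into spatio-temporal tubelet tokens, mask most of them, encode only the visible tokens, and decode pixel predictions at the masked positions. The patch size, masking ratio, model widths, and class/function names here are illustrative assumptions for exposition only, not the configuration or code used in the paper.

```python
# Minimal sketch of masked auto-encoding (MAE) on video. All hyperparameters
# (tubelet size, masking ratio, depths, widths) are illustrative assumptions.
import torch
import torch.nn as nn


class VideoMAESketch(nn.Module):
    def __init__(self, frames=16, image_size=224, patch=16, tubelet=2,
                 dim=256, mask_ratio=0.9):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.patch_dim = 3 * tubelet * patch * patch
        self.num_tokens = (frames // tubelet) * (image_size // patch) ** 2
        # Tubelet embedding: each token covers `tubelet` frames x patch x patch pixels.
        self.to_tokens = nn.Conv3d(3, dim, kernel_size=(tubelet, patch, patch),
                                   stride=(tubelet, patch, patch))
        self.pos = nn.Parameter(torch.zeros(1, self.num_tokens, dim))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=4)
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=2)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.to_pixels = nn.Linear(dim, self.patch_dim)

    def forward(self, video):  # video: (B, 3, T, H, W)
        tokens = self.to_tokens(video).flatten(2).transpose(1, 2) + self.pos  # (B, N, dim)
        B, N, D = tokens.shape
        num_keep = int(N * (1 - self.mask_ratio))
        # Random per-sample masking: the encoder only sees a small visible subset.
        ids = torch.rand(B, N, device=video.device).argsort(dim=1)
        keep_ids, mask_ids = ids[:, :num_keep], ids[:, num_keep:]
        visible = torch.gather(tokens, 1, keep_ids.unsqueeze(-1).expand(-1, -1, D))
        encoded = self.encoder(visible)
        # Decoder input: encoded visible tokens plus mask tokens carrying the
        # positional embeddings of the masked locations.
        masked = self.mask_token.expand(B, N - num_keep, D) + torch.gather(
            self.pos.expand(B, -1, -1), 1, mask_ids.unsqueeze(-1).expand(-1, -1, D))
        decoded = self.decoder(torch.cat([encoded, masked], dim=1))
        # Predict raw pixels only at the masked positions.
        pred = self.to_pixels(decoded[:, num_keep:])
        return pred, mask_ids


# Usage: training minimizes a mean-squared error between the predicted and
# ground-truth pixel patches at the masked positions (target extraction omitted).
model = VideoMAESketch()
video = torch.randn(2, 3, 16, 224, 224)
pred, mask_ids = model(video)
print(pred.shape)  # (2, num_masked_tokens, patch_dim)
```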
