LARP: Tokenizing Videos with a Learned Autoregressive Generative Prior

Authors: Hanyu Wang, Saksham Suri, Yixuan Ren, Hao Chen, Abhinav Shrivastava

Abstract: We present LARP, a novel video tokenizer designed to overcome limitations in
current video tokenization methods for autoregressive (AR) generative models.
Unlike traditional patchwise tokenizers that directly encode local visual
patches into discrete tokens, LARP introduces a holistic tokenization scheme
that gathers information from the visual content using a set of learned
holistic queries. This design allows LARP to capture more global and semantic
representations, rather than being limited to local patch-level information.
Furthermore, it offers flexibility by supporting an arbitrary number of
discrete tokens, enabling adaptive and efficient tokenization based on the
specific requirements of the task. To align the discrete token space with
downstream AR generation tasks, LARP integrates a lightweight AR transformer as
a training-time prior model that predicts the next token in its discrete latent
space. By incorporating the prior model during training, LARP learns a latent
space that is not only optimized for video reconstruction but is also
structured in a way that is more conducive to autoregressive generation.
Moreover, this process defines a sequential order for the discrete tokens,
progressively pushing them toward an optimal configuration during training,
ensuring smoother and more accurate AR generation at inference time.
Comprehensive experiments demonstrate LARP’s strong performance, achieving
state-of-the-art FVD on the UCF101 class-conditional video generation
benchmark. LARP enhances the compatibility of AR models with videos and opens
up the potential to build unified high-fidelity multimodal large language
models (MLLMs).

Source: http://arxiv.org/abs/2410.21264v1
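The abstract describes two mechanisms: a set of learned holistic queries that aggregate global information into a chosen number of discrete tokens, and a lightweight AR transformer used as a training-time prior over those tokens. The PyTorch sketch below is an illustrative reading of those two ideas, not LARP's actual architecture: the layer counts, dimensions, single cross-attention layer, nearest-neighbor quantizer, and plain causal transformer are all placeholder assumptions, and the straight-through or stochastic quantization needed to propagate the prior loss back into the tokenizer is omitted for brevity.

```python
import torch
import torch.nn as nn


class HolisticQueryTokenizer(nn.Module):
    """Sketch of holistic tokenization: learned queries cross-attend to video
    patch embeddings, and each query output is quantized against a codebook.
    The number of queries (and hence discrete tokens) is a free choice."""

    def __init__(self, num_queries=256, dim=512, codebook_size=1024):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self, patch_embeddings):
        # patch_embeddings: (batch, num_patches, dim) from a video patch encoder.
        B = patch_embeddings.shape[0]
        q = self.queries.unsqueeze(0).expand(B, -1, -1)
        # Each holistic query gathers information from all patches.
        latents, _ = self.cross_attn(q, patch_embeddings, patch_embeddings)
        # Nearest-neighbor lookup in the codebook yields discrete token indices.
        books = self.codebook.weight.unsqueeze(0).expand(B, -1, -1)
        dists = torch.cdist(latents, books)          # (batch, num_queries, codebook_size)
        token_ids = dists.argmin(dim=-1)             # (batch, num_queries)
        quantized = self.codebook(token_ids)         # (batch, num_queries, dim)
        return token_ids, quantized


class ARPrior(nn.Module):
    """Sketch of a lightweight training-time AR prior: a small causal
    transformer predicts the next discrete token, giving a loss that pushes
    the latent space toward being easy to model autoregressively."""

    def __init__(self, codebook_size=1024, dim=512, depth=2):
        super().__init__()
        self.embed = nn.Embedding(codebook_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, codebook_size)

    def forward(self, token_ids):
        # Causal mask: each position may only attend to earlier tokens.
        T = token_ids.shape[1]
        mask = torch.triu(
            torch.ones(T, T, dtype=torch.bool, device=token_ids.device), diagonal=1
        )
        x = self.transformer(self.embed(token_ids), mask=mask)
        logits = self.head(x)
        # Next-token cross-entropy on shifted targets.
        return nn.functional.cross_entropy(
            logits[:, :-1].reshape(-1, logits.shape[-1]),
            token_ids[:, 1:].reshape(-1),
        )


# Example usage with dummy inputs (shapes are illustrative only).
tokenizer = HolisticQueryTokenizer()
prior = ARPrior()
patches = torch.randn(2, 1024, 512)       # (batch, num_patches, dim)
token_ids, quantized = tokenizer(patches)
prior_loss = prior(token_ids)             # combined with a reconstruction loss during training
```

In this reading, the prior loss is added to the tokenizer's reconstruction objective during training, which is what would shape the discrete latent space for downstream AR generation; at inference time a full-scale AR generator would then model the token sequence produced by the tokenizer.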
