Authors: Qiuhong Shen, Xuanyu Yi, Mingbao Lin, Hanwang Zhang, Shuicheng Yan, Xinchao Wang
Abstract: We consider the problem of efficiently representing casually captured monocular videos in a spatially and temporally coherent manner. While existing approaches predominantly rely on 2D/2.5D techniques that treat videos as collections of spatiotemporal pixels, they struggle with complex motions, occlusions, and geometric consistency due to the absence of temporal coherence and explicit 3D structure. Drawing inspiration from the fact that a monocular video is a projection of the dynamic 3D world, we explore representing videos in their intrinsic 3D form through continuous flows of Gaussian primitives in space-time. In this
paper, we propose NutWorld, a novel framework that efficiently transforms
monocular videos into dynamic 3D Gaussian representations in a single forward
pass. At its core, NutWorld introduces a structured spatial-temporal aligned
Gaussian (STAG) representation, enabling optimization-free scene modeling with
effective depth and flow regularization. Through comprehensive experiments, we
demonstrate that NutWorld achieves high-fidelity video reconstruction quality
while enabling various downstream applications in real time. Demos and code
will be available at https://github.com/Nut-World/NutWorld.
Source: http://arxiv.org/abs/2502.03465v1
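
The abstract does not detail STAG's actual parameterization, so the sketch below is only a rough illustration of how a batch of spatial-temporal Gaussian primitives and a single feed-forward reconstruction pass might be laid out in PyTorch. All field names, the constant-velocity motion model, and the `reconstruct_video` / `encoder` interface are assumptions for illustration, not the authors' design.

```python
# Hypothetical sketch of dynamic 3D Gaussian primitives for video,
# inspired by (but not taken from) the NutWorld abstract.
from dataclasses import dataclass

import torch


@dataclass
class STAGaussians:
    """A batch of N spatial-temporal Gaussian primitives (assumed layout)."""
    means: torch.Tensor       # (N, 3) 3D centers
    scales: torch.Tensor      # (N, 3) per-axis extents
    rotations: torch.Tensor   # (N, 4) unit quaternions
    opacities: torch.Tensor   # (N, 1) values in [0, 1]
    colors: torch.Tensor      # (N, 3) RGB
    velocities: torch.Tensor  # (N, 3) assumed linear motion in space-time

    def means_at_time(self, t: float) -> torch.Tensor:
        """Advect Gaussian centers to time t under a constant-velocity flow."""
        return self.means + t * self.velocities


def reconstruct_video(frames: torch.Tensor,
                      encoder: torch.nn.Module) -> STAGaussians:
    """Map monocular frames (T, 3, H, W) to Gaussians in one forward pass.

    `encoder` stands in for a learned network that regresses all Gaussian
    attributes at once; crucially, there is no per-scene optimization loop,
    matching the abstract's "single forward pass" claim.
    """
    params = encoder(frames)  # assumed to return the six attribute tensors
    return STAGaussians(*params)
```

Under this framing, rendering the video at any timestamp reduces to advecting the Gaussian centers with `means_at_time(t)` and splatting the result, which is what would make real-time downstream applications plausible.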