Authors: Sili Chen, Hengkai Guo, Shengnan Zhu, Feihu Zhang, Zilong Huang, Jiashi Feng, Bingyi Kang
Abstract: Depth Anything has achieved remarkable success in monocular depth estimation
with strong generalization ability. However, it suffers from temporal
inconsistency in videos, which hinders its practical application. Various methods
have been proposed to alleviate this issue by leveraging video generation
models or introducing priors from optical flow and camera poses. Nonetheless,
these methods are applicable only to short videos (under 10 seconds) and require a
trade-off between quality and computational efficiency. We propose Video Depth
Anything for high-quality, consistent depth estimation in super-long videos
(over several minutes) without sacrificing efficiency. We base our model on
Depth Anything V2 and replace its head with an efficient spatial-temporal head.
We design a straightforward yet effective temporal consistency loss by
constraining the temporal depth gradient, eliminating the need for additional
geometric priors. The model is trained on a joint dataset of video depth and
unlabeled images, similar to Depth Anything V2. Moreover, we develop a novel
key-frame-based strategy for long-video inference. Experiments
show that our model can be applied to arbitrarily long videos without
compromising quality, consistency, or generalization ability. Comprehensive
evaluations on multiple video benchmarks demonstrate that our approach sets a
new state-of-the-art in zero-shot video depth estimation. We offer models of
different scales to support a range of scenarios, with our smallest model
capable of real-time performance at 30 FPS.
Source: http://arxiv.org/abs/2501.12375v1
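
The abstract does not spell out the temporal consistency loss beyond saying it constrains the temporal depth gradient. The sketch below is one plausible reading of that idea, assuming a PyTorch setting in which predicted and ground-truth depth sequences have shape (B, T, H, W) and the loss penalizes mismatches between their frame-to-frame differences. The function name, masking scheme, and L1 penalty are illustrative assumptions, not the authors' implementation.

```python
import torch

def temporal_gradient_loss(pred: torch.Tensor,
                           gt: torch.Tensor,
                           valid: torch.Tensor) -> torch.Tensor:
    """Hypothetical temporal-gradient consistency loss.

    Penalizes the difference between predicted and ground-truth
    frame-to-frame depth changes, so the prediction is only required
    to vary over time where the true depth varies.

    pred, gt: depth sequences of shape (B, T, H, W)
    valid:    boolean mask of shape (B, T, H, W) marking valid depth pixels
    """
    # Temporal gradients: differences between consecutive frames.
    pred_grad = pred[:, 1:] - pred[:, :-1]      # (B, T-1, H, W)
    gt_grad = gt[:, 1:] - gt[:, :-1]            # (B, T-1, H, W)

    # A pixel contributes only if it is valid in both adjacent frames.
    mask = (valid[:, 1:] & valid[:, :-1]).float()

    # L1 penalty on the gradient mismatch, averaged over valid pixels.
    diff = (pred_grad - gt_grad).abs() * mask
    return diff.sum() / mask.sum().clamp(min=1.0)
```

In training, a term of this form would presumably be combined with a per-frame (scale- and shift-invariant) depth loss under some weighting; the weighting and the exact gradient definition are assumptions here.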
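
The key-frame-based strategy for long-video inference is likewise only named, not described, in the abstract. A generic way to stitch window-wise affine-invariant depth predictions into one long, consistent sequence is to run the model on overlapping windows and align each new window to the already-stitched output with a least-squares scale and shift fit over the overlapping frames. The sketch below shows that generic pattern only; `model`, the window and overlap sizes, and the alignment step are assumptions and do not reproduce the paper's key-frame mechanism.

```python
import numpy as np

def align_scale_shift(src: np.ndarray, ref: np.ndarray) -> tuple[float, float]:
    """Least-squares scale s and shift t such that s * src + t ~= ref."""
    x, y = src.ravel(), ref.ravel()
    A = np.stack([x, np.ones_like(x)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(s), float(t)

def infer_long_video(model, frames, window=32, overlap=8):
    """Window-by-window inference with scale/shift stitching on the overlap.

    `model(clip)` is assumed to return per-frame depth maps of shape
    (T, H, W) for a clip of frames. This is a generic stitching scheme
    for illustration, not the paper's key-frame algorithm.
    """
    depths = list(model(frames[:window]))            # first window as-is
    start = window - overlap
    while start < len(frames):
        pred = np.asarray(model(frames[start:start + window]))   # (T, H, W)
        n_ov = min(overlap, len(depths) - start)     # frames shared with output so far
        s, t = align_scale_shift(pred[:n_ov],
                                 np.stack(depths[start:start + n_ov]))
        aligned = s * pred + t
        depths.extend(aligned[n_ov:])                # keep only the new frames
        start += window - overlap
    return np.stack(depths)
```

Whatever the authors' exact mechanism, the goal it serves is the one stated in the abstract: keeping depth scale and consistency stable across arbitrarily long videos without reprocessing the whole sequence.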