Stag-1: Towards Realistic 4D Driving Simulation with Video Generation Model

Authors: Lening Wang, Wenzhao Zheng, Dalong Du, Yunpeng Zhang, Yilong Ren, Han Jiang, Zhiyong Cui, Haiyang Yu, Jie Zhou, Jiwen Lu, Shanghang Zhang

Abstract: 4D driving simulation is essential for developing realistic autonomous
driving simulators. Despite advancements in existing methods for generating
driving scenes, significant challenges remain in view transformation and
spatial-temporal dynamic modeling. To address these limitations, we propose a
Spatial-Temporal simulAtion for drivinG (Stag-1) model to reconstruct
real-world scenes and design a controllable generative network to achieve 4D
simulation. Stag-1 constructs continuous 4D point cloud scenes using
surround-view data from autonomous vehicles. It decouples spatial-temporal
relationships and produces coherent keyframe videos. Additionally, Stag-1
leverages video generation models to obtain photo-realistic and controllable 4D
driving simulation videos from any perspective. To expand the range of view
generation, we train on vehicle motion videos using decomposed camera poses,
enhancing the modeling of distant scenes. Furthermore, we reconstruct
vehicle camera trajectories to integrate 3D points across consecutive views,
enabling comprehensive scene understanding along the temporal dimension.
Following extensive multi-level scene training, Stag-1 can simulate from any
desired viewpoint and achieve a deep understanding of scene evolution under
static spatial-temporal conditions. Compared to existing methods, our approach
shows promising performance in multi-view scene consistency, background
coherence, and accuracy, contributing to ongoing advances in realistic
autonomous driving simulation. Code: https://github.com/wzzheng/Stag.

Source: http://arxiv.org/abs/2412.05280v1
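
The abstract's mention of decomposed camera poses and reconstructed trajectories suggests aligning 3D points between consecutive views via relative transforms. Below is a hedged sketch of that alignment step, under the same assumed 4x4 cam-to-world pose convention; it is an illustration of the general technique, not the paper's implementation.

import numpy as np

def relative_pose(T_a, T_b):
    """4x4 transform taking frame-a camera coordinates into frame b's camera."""
    return np.linalg.inv(T_b) @ T_a

def align_points(points_a, T_a, T_b):
    """Re-express frame-a 3D points (N, 3) in frame b's camera coordinates,
    decomposing the relative motion into rotation R and translation t."""
    T_rel = relative_pose(T_a, T_b)
    R, t = T_rel[:3, :3], T_rel[:3, 3]
    return points_a @ R.T + t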
