DI-PCG: Diffusion-based Efficient Inverse Procedural Content Generation for High-quality 3D Asset Creation

Authors: Wang Zhao, Yan-Pei Cao, Jiale Xu, Yuejiang Dong, Ying Shan

Abstract: Procedural Content Generation (PCG) is powerful for creating high-quality 3D
content, yet steering it to produce a desired shape is difficult and often
requires extensive parameter tuning. Inverse Procedural Content Generation aims
to find the best parameters automatically given an input condition. However,
existing sampling-based methods require many sample iterations, while neural
network-based methods offer limited controllability. In this work, we present
DI-PCG, a novel and efficient method for inverse PCG from general image
conditions. At its core is a lightweight diffusion transformer model in which the
PCG parameters are treated directly as the denoising target, with the observed
images as conditions controlling parameter generation. DI-PCG is efficient and
effective: with only 7.6M network parameters and 30 GPU hours of training, it
recovers parameters accurately and generalizes well to in-the-wild images.
Quantitative and qualitative experimental results validate the effectiveness of
DI-PCG on inverse PCG and image-to-3D generation tasks. DI-PCG offers a
promising approach to efficient inverse PCG and a valuable step toward a 3D
generation path that models how to construct a 3D asset with parametric models.

Source: http://arxiv.org/abs/2412.15200v1
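The abstract's central idea, treating the flat vector of PCG parameters as the diffusion target while image features serve as the condition, is compact enough to sketch. Below is a minimal PyTorch illustration of that setup; the class name `ParamDenoiser`, all layer sizes, the use of cross-attention for conditioning, and the DDPM-style epsilon-prediction training step are assumptions made for illustration, not the paper's actual architecture or hyperparameters.

```python
import torch
import torch.nn as nn

class ParamDenoiser(nn.Module):
    """Sketch of a lightweight diffusion transformer that denoises a flat
    vector of PCG parameters, conditioned on image features via
    cross-attention. All sizes are illustrative, not the paper's config."""

    def __init__(self, n_params=32, d_model=256, n_heads=8, n_layers=6, d_img=768):
        super().__init__()
        # Each scalar parameter becomes one token.
        self.param_embed = nn.Linear(1, d_model)
        self.pos_embed = nn.Parameter(torch.zeros(1, n_params, d_model))
        self.time_embed = nn.Sequential(
            nn.Linear(1, d_model), nn.SiLU(), nn.Linear(d_model, d_model))
        self.img_proj = nn.Linear(d_img, d_model)  # project image tokens
        layer = nn.TransformerDecoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.blocks = nn.TransformerDecoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)  # predict noise per parameter

    def forward(self, x_t, t, img_tokens):
        # x_t: (B, n_params) noisy parameters, normalized to [-1, 1]
        # t:   (B,) diffusion timesteps; img_tokens: (B, N, d_img)
        h = self.param_embed(x_t.unsqueeze(-1)) + self.pos_embed
        h = h + self.time_embed(t.float().unsqueeze(-1)).unsqueeze(1)
        ctx = self.img_proj(img_tokens)
        h = self.blocks(h, ctx)          # cross-attend to image features
        return self.head(h).squeeze(-1)  # predicted noise, (B, n_params)

# One standard DDPM training step (epsilon prediction) on dummy data:
model = ParamDenoiser()
x0 = torch.rand(4, 32) * 2 - 1          # normalized ground-truth parameters
img_tokens = torch.randn(4, 197, 768)   # e.g. ViT patch features of the image
t = torch.randint(0, 1000, (4,))
betas = torch.linspace(1e-4, 0.02, 1000)
alphas_bar = torch.cumprod(1 - betas, dim=0)
a = alphas_bar[t].sqrt().unsqueeze(-1)
s = (1 - alphas_bar[t]).sqrt().unsqueeze(-1)
noise = torch.randn_like(x0)
x_t = a * x0 + s * noise                # forward-diffuse the parameters
loss = nn.functional.mse_loss(model(x_t, t, img_tokens), noise)
```

At inference, iteratively denoising from Gaussian noise under a given image condition would yield a parameter vector that the procedural generator then turns into a 3D asset; because the network only predicts a small parameter vector rather than geometry, the model can stay small, consistent with the 7.6M-parameter figure quoted above.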
