Authors: Cheng Zhang, Yuanhao Wang, Francisco Vicente Carrasco, Chenglei Wu, Jinlong Yang, Thabo Beeler, Fernando De la Torre
Abstract: We introduce FabricDiffusion, a method for transferring fabric textures from
a single clothing image to 3D garments of arbitrary shapes. Existing approaches
typically synthesize textures on the garment surface through 2D-to-3D texture
mapping or depth-aware inpainting via generative models. Unfortunately, these
methods often struggle to capture and preserve texture details, particularly
due to challenging occlusions, distortions, or poses in the input image.
Inspired by the observation that in the fashion industry, most garments are
constructed by stitching sewing patterns with flat, repeatable textures, we
cast the task of clothing texture transfer as extracting distortion-free,
tileable texture materials that are subsequently mapped onto the UV space of
the garment. Building upon this insight, we train a denoising diffusion model
with a large-scale synthetic dataset to rectify distortions in the input
texture image. This process yields a flat texture map that enables a tight
coupling with existing Physically-Based Rendering (PBR) material generation
pipelines, allowing for realistic relighting of the garment under various
lighting conditions. We show that FabricDiffusion can transfer various features
from a single clothing image, including texture patterns, material properties,
and detailed prints and logos. Extensive experiments demonstrate that our model
significantly outperforms state-of-the-art methods on both synthetic data and
real-world, in-the-wild clothing images while generalizing to unseen textures
and garment shapes.
Source: http://arxiv.org/abs/2410.01801v1
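The abstract describes a three-stage flow: rectify a distorted texture crop from the clothing image with a denoising diffusion model, tile the resulting flat texture over the garment's UV space, and hand it to a PBR material generation pipeline for relighting. The sketch below illustrates that flow at a high level only; the denoiser and PBR-generator interfaces, function names, and default sizes are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the pipeline summarized in the abstract.
# All class/function names and interfaces here are assumptions for illustration.

import numpy as np


def rectify_texture(texture_crop: np.ndarray, denoiser, num_steps: int = 50) -> np.ndarray:
    """Use an assumed denoising diffusion model, conditioned on the distorted
    input crop, to produce a distortion-free, tileable texture."""
    # Start from Gaussian noise in the texture domain.
    sample = np.random.randn(*texture_crop.shape)
    for t in reversed(range(num_steps)):
        # The (assumed) denoiser returns a cleaner sample at each step,
        # conditioned on the distorted crop.
        sample = denoiser(sample, t, condition=texture_crop)
    return sample


def tile_into_uv(flat_texture: np.ndarray, uv_size: int = 2048) -> np.ndarray:
    """Tile the rectified texture across the garment's UV space."""
    h, w = flat_texture.shape[:2]
    reps_y = uv_size // h + 1
    reps_x = uv_size // w + 1
    tiled = np.tile(flat_texture, (reps_y, reps_x, 1))
    return tiled[:uv_size, :uv_size]


def transfer_fabric(texture_crop: np.ndarray, denoiser, pbr_generator):
    """End-to-end sketch: rectify, tile into UV space, then derive PBR maps
    (e.g., albedo/normal/roughness) via an assumed downstream generator."""
    flat = rectify_texture(texture_crop, denoiser)
    uv_texture = tile_into_uv(flat)
    return pbr_generator(uv_texture)
```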