Latent Swap Joint Diffusion for Long-Form Audio Generation

Authors: Yusheng Dai, Chenxi Wang, Chang Li, Chen Wang, Jun Du, Kewei Li, Ruoyu Wang, Jiefeng Ma, Lei Sun, Jianqing Gao

Abstract: Previous work on long-form audio generation using global-view diffusion or
iterative generation demands significant training or inference costs. While
recent advances in multi-view joint diffusion for panoramic generation offer
an efficient alternative, they struggle with spectrum generation, suffering
severe overlap distortions and high cross-view consistency costs. We first
examine this phenomenon through the connectivity inheritance of latent maps and
find that averaging operations excessively smooth the high-frequency components
of the latent map. To address these issues, we propose Swap Forward (SaFa), a
frame-level latent-swap framework that synchronizes multiple diffusions to
produce globally coherent long audio with richer spectrum detail in a
forward-only manner. At its core, a bidirectional Self-Loop Latent Swap is
applied between adjacent views, leveraging the stepwise diffusion trajectory to
adaptively enhance high-frequency components without disrupting low-frequency
ones. Furthermore, to ensure cross-view consistency, a unidirectional
Reference-Guided Latent Swap is applied between the reference and the
non-overlap regions of each subview during the early stages, providing
centralized trajectory guidance. Quantitative and qualitative experiments
demonstrate that SaFa significantly outperforms existing joint-diffusion
methods and even training-based long-audio generation models. Moreover, we find
that it also adapts well to panoramic generation, achieving state-of-the-art
performance with greater efficiency and model generalizability. Project page:
https://swapforward.github.io/.

Source: http://arxiv.org/abs/2502.05130v1
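The abstract contrasts the usual overlap-averaging used in multi-view joint diffusion (which over-smooths high-frequency latent content) with a frame-level swap between adjacent views. The paper's exact swap schedule is not given in the abstract, so the sketch below is only an illustrative interpretation: instead of averaging the overlapping latent frames of two adjacent views, alternate frames are exchanged bidirectionally, so each view receives its neighbour's information at full amplitude. The function name, array shapes, and the even/odd interleaving pattern are assumptions, not the authors' implementation.

```python
import numpy as np

def self_loop_latent_swap(view_a, view_b, overlap):
    """Hypothetical sketch of a bidirectional frame-level latent swap.

    view_a, view_b: latent maps of adjacent views, shape [C, F, T]
    (channels, frequency bins, frames); the last `overlap` frames of
    view_a align with the first `overlap` frames of view_b.

    Unlike averaging (0.5 * a + 0.5 * b), which attenuates
    high-frequency detail, whole frames are exchanged at full
    amplitude: even-indexed overlap frames are swapped between the
    two views, odd-indexed frames are left untouched.
    """
    a_tail = view_a[..., -overlap:].copy()
    b_head = view_b[..., :overlap].copy()

    even = np.arange(overlap)[::2]  # frames to exchange (assumed pattern)
    view_a[..., -overlap:][..., even] = b_head[..., even]
    view_b[..., :overlap][..., even] = a_tail[..., even]
    return view_a, view_b
```

In a joint-diffusion loop, such a swap would be applied to the noisy latents after every denoising step, so the exchanged frames are progressively re-harmonized by the model rather than blended once at the end.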
