Authors: Yoad Tewel, Rinon Gal, Dvir Samuel, Yuval Atzmon, Lior Wolf, Gal Chechik
Abstract: Adding objects to images based on text instructions is a challenging task in
semantic image editing, requiring a balance between preserving the original
scene and seamlessly integrating the new object in a fitting location. Despite
extensive efforts, existing models often struggle with this balance,
particularly with finding a natural location for adding an object in complex
scenes. We introduce Add-it, a training-free approach that extends diffusion
models’ attention mechanisms to incorporate information from three key sources:
the scene image, the text prompt, and the generated image itself. Our weighted
extended-attention mechanism maintains structural consistency and fine details
while ensuring natural object placement. Without task-specific fine-tuning,
Add-it achieves state-of-the-art results on both real and generated image
insertion benchmarks, including our newly constructed “Additing Affordance
Benchmark” for evaluating object placement plausibility, outperforming
supervised methods. Human evaluations show that Add-it is preferred in over 80%
of cases, and it also demonstrates improvements in various automated metrics.
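The abstract does not give the exact formulation of the weighted extended-attention mechanism, but the general idea it describes, letting queries from the generated image attend jointly over keys and values drawn from the text prompt, the source scene image, and the generated image itself, with per-source weights, might look like the following minimal sketch. All function and variable names, and the choice to apply the weights to the pre-softmax logits, are illustrative assumptions rather than the paper's implementation:

```python
import torch
import torch.nn.functional as F

def weighted_extended_attention(q, kv_sources, weights):
    """Hypothetical sketch of one weighted extended-attention step.

    q          : (B, Nq, d) queries from the generated image's attention layer.
    kv_sources : list of (K, V) pairs, one per source (e.g. text prompt,
                 scene image, generated image), each K, V of shape (B, Nk, d).
    weights    : one scalar per source, scaling that source's attention logits
                 before a joint softmax over all sources.
    """
    d = q.shape[-1]
    logits, values = [], []
    for w, (k, v) in zip(weights, kv_sources):
        # Scaled dot-product logits for this source, weighted before softmax.
        logits.append(w * (q @ k.transpose(-2, -1)) / d ** 0.5)
        values.append(v)
    # Softmax jointly over the concatenated keys of all sources, so the
    # weights trade off how much each source contributes to the output.
    attn = F.softmax(torch.cat(logits, dim=-1), dim=-1)
    return attn @ torch.cat(values, dim=1)  # (B, Nq, d)
```

Under this reading, the per-source weights control the balance the abstract describes: upweighting the scene source favors structural consistency with the original image, while the prompt and self sources drive the placement and appearance of the new object.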
Source: http://arxiv.org/abs/2411.07232v1