Our approach, BOOTPLACE, detects regions of interest (represented as bounding boxes) for object composition and assigns each target object to its best-matched detected region. Each object is connected to every detected region via a weighted link, with the bold arrow indicating the strongest match.
In this paper, we tackle the copy-paste image-to-image composition problem with a focus on object placement learning. Prior methods have leveraged generative models to reduce the reliance on dense supervision; however, this often limits their capacity to model complex data distributions. Alternatively, transformer networks with a sparse contrastive loss have been explored, but their over-relaxed regularization often leads to imprecise object placement. We introduce BOOTPLACE, a novel paradigm that formulates object placement as a placement-by-detection problem. Our approach begins by identifying suitable regions of interest for object placement. This is achieved by training a specialized detection transformer on object-subtracted backgrounds, enhanced with multi-object supervision. It then semantically associates each target compositing object with the detected regions based on their complementary characteristics. Through a bootstrapped training approach applied to randomly object-subtracted images, our model enforces meaningful placements through extensive paired data augmentation. Experimental results on established benchmarks demonstrate BOOTPLACE's superior performance in object repositioning, markedly surpassing state-of-the-art baselines on the Cityscapes and OPA datasets with notable improvements in IoU scores. Additional ablation studies further showcase the compositionality and generalizability of our approach, supported by user study evaluations.
Network inference. Given a target image, several object queries (e.g., two cars and a pedestrian), and scene object locations, BOOTPLACE detects a set of candidate regions of interest and associates each object with its best-fitting region; the resulting associations are used to produce the composite image. ⛒ denotes feature concatenation and ▿ denotes region-wise product.
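The object-to-region association step can be illustrated with a minimal sketch: score every (object, region) pair between feature embeddings, normalize the scores into weighted connections, and take the argmax as the strongest link. The feature dimensions, cosine-similarity scoring, and softmax weighting here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical embeddings: 3 object queries (e.g., two cars and a
# pedestrian) and 5 detected candidate regions, each an 8-dim feature.
rng = np.random.default_rng(0)
obj_feats = rng.standard_normal((3, 8))
region_feats = rng.standard_normal((5, 8))

# Cosine-similarity association matrix (objects x regions).
obj_n = obj_feats / np.linalg.norm(obj_feats, axis=1, keepdims=True)
reg_n = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
scores = obj_n @ reg_n.T

# Softmax over regions yields the weighted connections; the argmax is
# the strongest link (the bold arrow) used to place each object.
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
best_region = weights.argmax(axis=1)  # one region index per object query
```

In practice the association network would produce these scores from learned, context-aware features rather than raw similarities, but the matching logic is the same.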
Network architecture and training. We prepare training data by decomposing a source image into a randomly object-subtracted image I and a set of object queries. During training, image I and the scene object locations are fed into a detection transformer for region-of-interest detection. The object queries are fed into an association network for object-to-region matching, where the generated association links each object query with a detected region of interest. The loss comprises a detection loss and an association loss. At a high level, we visualize the relations among object queries, detected regions of interest, and ground-truth locations on the right side, where the best-matched association arrow is highlighted in bold.
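The two-term training objective described above can be sketched as follows. The specific loss forms (an L1 box regression term and a cross-entropy matching term), tensor shapes, and ground-truth assignments are illustrative assumptions; DETR-style detection training typically also includes classification and GIoU terms.

```python
import numpy as np

rng = np.random.default_rng(1)

# Detection loss: L1 distance between predicted and ground-truth boxes
# for the detected regions of interest (boxes as (cx, cy, w, h)).
pred_boxes = rng.random((4, 4))
gt_boxes = rng.random((4, 4))
det_loss = np.abs(pred_boxes - gt_boxes).mean()

# Association loss: cross-entropy pushing each object query toward its
# ground-truth region among the detections.
assoc_logits = rng.standard_normal((3, 4))  # 3 object queries x 4 regions
gt_region = np.array([2, 0, 3])             # ground-truth region per query
log_probs = assoc_logits - np.log(
    np.exp(assoc_logits).sum(axis=1, keepdims=True)
)
assoc_loss = -log_probs[np.arange(3), gt_region].mean()

# Total training objective: detection loss plus association loss.
total_loss = det_loss + assoc_loss
```

Because the object-subtracted images are generated by random removal, such paired supervision is available for free at scale, which is what the bootstrapped training exploits.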
@article{zhou2025bootplace,
title = {{BOOTPLACE}: Bootstrapped Object Placement with Detection Transformers},
author = {Zhou, Hang and Zuo, Xinxin and Ma, Rui and Cheng, Li},
journal = {arXiv preprint arXiv:},
year = {2025}
}