Shape-Guided Object Inpainting: Supplementary Material
- Submitted by:
- Yu Zeng
- Last updated:
- 31 December 2024 - 6:12pm
- Document Type:
- supplementary material
Previous work on image inpainting has mainly focused on completing background regions or partially missing objects, while the problem of inpainting an entirely missing object remains unexplored.
This work studies a new image inpainting problem, i.e., shape-guided object inpainting. Given an incomplete input image, the goal is to fill the hole by generating an object based on the surrounding context and the implicit guidance provided by the hole's shape.
We propose a new data preparation method and a novel Contextual Object Generator for the object inpainting task.
On the data side, we incorporate object priors into the training data by using object instances as holes. The Contextual Object Generator is a two-stream architecture that combines the standard bottom-up image completion process with a top-down object generation process. A predictive class embedding module bridges the two streams by predicting the category of the missing object from the bottom-up features; a semantic object map is then derived from this prediction and serves as the input to the top-down stream.
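The data flow described above can be illustrated with a minimal sketch. All names and the toy "features" below are hypothetical stand-ins for the paper's learned convolutional networks; the sketch only shows how the bottom-up stream, the predictive class embedding, and the top-down stream connect, not the actual model.

```python
# Hypothetical sketch of the two-stream Contextual Object Generator.
# Images are flattened to 1-D lists of intensities; hole_mask marks
# missing pixels with 1. The real model uses learned CNN streams.

def bottom_up_stream(image, hole_mask):
    """Bottom-up image completion pass: encode the visible context."""
    # Toy context "feature": mean intensity of the visible pixels.
    visible = [p for p, m in zip(image, hole_mask) if m == 0]
    return sum(visible) / max(len(visible), 1)

def predict_class_embedding(context_feature, class_prototypes):
    """Predict the missing object's category from bottom-up features."""
    # Toy classifier: pick the class whose prototype value is closest.
    return min(class_prototypes,
               key=lambda c: abs(class_prototypes[c] - context_feature))

def top_down_stream(hole_mask, class_label):
    """Top-down object generation from a semantic object map."""
    # Toy semantic object map: label every hole pixel with the class.
    return [class_label if m == 1 else None for m in hole_mask]

def contextual_object_generator(image, hole_mask, class_prototypes):
    # The predictive class embedding module bridges the two streams.
    feature = bottom_up_stream(image, hole_mask)
    label = predict_class_embedding(feature, class_prototypes)
    return label, top_down_stream(hole_mask, label)

# Usage: a 1-D "image" with a hole over pixels 2-3 and two toy classes.
image = [0.1, 0.2, 0.0, 0.0, 0.9]
mask = [0, 0, 1, 1, 0]
label, obj_map = contextual_object_generator(
    image, mask, {"cat": 0.4, "car": 0.9})
```

In this toy run the visible context averages to roughly 0.4, so the class embedding selects `"cat"` and the top-down stream fills the hole pixels with that label, mirroring how the predicted category conditions the object generation stream.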
Experiments demonstrate that the proposed method can generate realistic objects that fit the context in terms of both visual appearance and semantic meaning. Code will be made publicly available upon acceptance of the paper.