
We introduce a new method for generating color images from sketches or edge maps. Current methods either require some form of additional user guidance or are limited to the "paired" translation approach. We argue that segmentation information can provide valuable guidance for sketch colorization. To this end, we propose to leverage semantic image segmentation, as provided by a general-purpose panoptic segmentation network, to create an additional adversarial loss function. Our loss function can be integrated into any baseline GAN model.
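To illustrate the idea at a high level, the generator's objective can be sketched as the usual adversarial loss on the colorized image plus a second adversarial term computed on its segmentation. The snippet below is a minimal, hypothetical sketch (not the paper's implementation): `d_image` and `d_segmentation` stand in for discriminator scores on the generated image and on its panoptic segmentation, and `seg_weight` is an assumed weighting hyperparameter.

```python
import numpy as np

def bce(pred, target):
    # Binary cross-entropy on discriminator scores in (0, 1).
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def generator_loss(d_image, d_segmentation, seg_weight=1.0):
    """Combine the standard adversarial loss on the colorized image with an
    additional adversarial term on the segmentation of that image.
    Both inputs are discriminator scores for generated samples; the generator
    wants both to look 'real' (label 1)."""
    real_img = np.ones_like(d_image)
    real_seg = np.ones_like(d_segmentation)
    return bce(d_image, real_img) + seg_weight * bce(d_segmentation, real_seg)
```

In this sketch, setting `seg_weight` to zero recovers the baseline GAN objective, which is what makes the extra term easy to bolt onto an existing model.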