DISTRIBUTION PADDING IN CONVOLUTIONAL NEURAL NETWORKS

Citation Author(s):
Anh-Duc Nguyen, Seonghwa Choi, Woojae Kim, Sewoong Ahn, Jinwoo Kim, Sanghoon Lee
Submitted by:
Duc Nguyen
Last updated:
17 September 2019 - 3:06am
Document Type:
Presentation Slides
Document Year:
2019
Presenters:
Duc Nguyen
Paper Code:
ICIP-2737

Abstract

Although zero padding is the de facto standard in convolutional
neural networks for maintaining the output size, it is problematic
because it significantly alters the input distribution around the
border regions. To mitigate this problem, we propose a new padding
technique termed distribution padding. The goal of the method is to
approximately preserve the statistics of the input's border regions.
We introduce two variants: in both, the padded values are derived
from the means of the border patches, but each variant handles those
values in a different way. Through extensive experiments on image
classification and style transfer using different architectures, we
demonstrate that the proposed padding technique consistently
outperforms the default zero padding, and hence is a potential
candidate for its replacement.
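The abstract's core idea, padding with the means of border patches rather than zeros, can be sketched as follows. This is a simplified, hypothetical illustration for a single 2D channel, not the paper's exact method: the function name `mean_border_pad` and the patch-averaging scheme are assumptions made for the sketch.

```python
import numpy as np

def mean_border_pad(x, pad=1, patch=3):
    """Hypothetical sketch of distribution-style padding for a 2D array:
    each padded value is the mean of a nearby border patch, so the padded
    region roughly preserves border statistics (unlike zero padding)."""
    h, w = x.shape
    out = np.zeros((h + 2 * pad, w + 2 * pad), dtype=float)
    out[pad:pad + h, pad:pad + w] = x  # copy the original image into the center

    # Top/bottom padding: mean of a patch of border pixels around each column.
    for j in range(w):
        lo, hi = max(0, j - patch // 2), min(w, j + patch // 2 + 1)
        out[:pad, pad + j] = x[:patch, lo:hi].mean()
        out[pad + h:, pad + j] = x[-patch:, lo:hi].mean()

    # Left/right padding: mean of a patch of border pixels around each row.
    for i in range(h):
        lo, hi = max(0, i - patch // 2), min(h, i + patch // 2 + 1)
        out[pad + i, :pad] = x[lo:hi, :patch].mean()
        out[pad + i, pad + w:] = x[lo:hi, -patch:].mean()

    # Corners: mean of the nearest corner patch.
    out[:pad, :pad] = x[:patch, :patch].mean()
    out[:pad, pad + w:] = x[:patch, -patch:].mean()
    out[pad + h:, :pad] = x[-patch:, :patch].mean()
    out[pad + h:, pad + w:] = x[-patch:, -patch:].mean()
    return out
```

Unlike zeros, the padded ring here tracks the local mean of the border, which is the statistic the abstract says both proposed variants are built from; the two variants differ only in how these mean values are subsequently handled.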

Files

distributional padding.pptx
