EMPIRICAL ANALYSIS OF OVERFITTING AND MODE DROP IN GAN TRAINING

Citation Author(s):
Chuan-Sheng Foo, Kim-Hui Yap, Vijay Chandrasekhar
Submitted by:
Yasin Yazici
Last updated:
3 November 2020 - 12:06am
Document Type:
Presentation Slides
Document Year:
2020
Event:
ICIP 2020
Presenters Name:
Yasin Yazici
Paper Code:
ARS-03.19

Abstract:

We examine two key questions in GAN training, namely overfitting and mode drop, from an empirical perspective. We show that when stochasticity is removed from the training procedure, GANs can overfit and exhibit almost no mode drop. Our results shed light on important characteristics of the GAN training procedure. They also provide evidence against prevailing intuitions that GANs do not memorize the training set, and that mode dropping is mainly due to properties of the GAN objective rather than how it is optimized during training.
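The key manipulation in the abstract is removing stochasticity from the training procedure. A minimal sketch of what that means in practice (not the authors' code; the function names and the fixed-noise-bank construction are illustrative assumptions): in standard GAN training, each step draws a reshuffled minibatch and fresh noise, while in the deterministic variant the data order and noise vectors are fixed, so every epoch presents identical inputs and memorization can be probed.

```python
import random

def stochastic_batches(data, noise_dim, batch_size, steps, rng):
    """Standard regime: random minibatches and freshly sampled noise each step."""
    for _ in range(steps):
        real = rng.sample(data, batch_size)                 # reshuffled minibatch
        noise = [[rng.gauss(0.0, 1.0) for _ in range(noise_dim)]
                 for _ in range(batch_size)]                # fresh z ~ N(0, I)
        yield real, noise

def deterministic_batches(data, noise_bank, batch_size, steps):
    """Stochasticity removed: fixed data order, fixed noise bank, no resampling."""
    for step in range(steps):
        lo = (step * batch_size) % len(data)
        real = data[lo:lo + batch_size]                     # same slice every pass
        noise = noise_bank[lo:lo + batch_size]              # reused noise vectors
        yield real, noise
```

Under the deterministic sampler, epoch k and epoch k+1 feed the generator and discriminator identical (real, noise) pairs; comparing trained behavior across the two regimes is what isolates whether overfitting and mode drop come from the objective itself or from how it is optimized.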

Dataset Files

ICIP2020_Presentation.pptx
