Detection and Attribution of Models Trained on Generated Data

Citation Author(s):
Ge Han, Ahmed Salem, Zheng Li, Shanqing Guo, Michael Backes, Yang Zhang
Submitted by:
Ge Han
Last updated:
8 April 2024 - 10:18pm
Document Type:
Poster
Document Year:
2024
Event:
Presenters:
Ge Han
Paper Code:
IFS-P6.4
 

Generative Adversarial Networks (GANs) are widely used to generate training data for models, as this can improve performance and/or protect sensitive information. However, it also introduces risks: malicious GANs may compromise or sabotage models by poisoning their training data. Verifying the origin of a model’s training data is therefore important for accountability. In this work, we take the first step in the forensic analysis of models trained on GAN-generated data. Specifically, we first detect whether a model is trained on GAN-generated or real data, and we then attribute models trained on GAN-generated data to their respective source GANs. We conduct extensive experiments on three datasets, using four popular GAN architectures and four common model architectures. Empirical results show the remarkable performance of our detection and attribution methods. Furthermore, a more in-depth study reveals that models trained on different data sources exhibit different decision boundaries and behaviours.
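As a rough illustration of the detection-and-attribution pipeline described in the abstract, the sketch below frames both steps as meta-classifiers over feature vectors extracted from a target model (here assumed to be its posteriors on a fixed probe set, gathered from shadow models). The feature extraction, the choice of scikit-learn's RandomForestClassifier, and all function names are assumptions made for illustration; they are not taken from the poster itself.

```python
# Hypothetical sketch of a two-stage forensic pipeline:
#   (1) detect whether a target model was trained on GAN-generated or real data,
#   (2) if GAN-generated, attribute the model to a candidate source GAN.
# Feature extraction via posteriors on probe inputs is an assumption.
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def extract_features(target_model, probe_inputs):
    """Query the target model on a fixed probe set and flatten its
    posterior outputs into a single feature vector."""
    posteriors = target_model(probe_inputs)        # shape: (n_probes, n_classes)
    return np.asarray(posteriors).ravel()


def train_detector(features, is_generated):
    """Binary meta-classifier: real vs. GAN-generated training data."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(features, is_generated)
    return clf


def train_attributor(features, source_gan):
    """Multi-class meta-classifier: which candidate GAN produced the data."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(features, source_gan)
    return clf


if __name__ == "__main__":
    # Toy demonstration with synthetic feature vectors standing in for
    # features collected from shadow models.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 50))             # 80 shadow models, 50-dim features
    y_detect = rng.integers(0, 2, size=80)    # 0 = real data, 1 = GAN-generated
    y_attr = rng.integers(0, 4, size=80)      # four candidate source GANs

    detector = train_detector(X, y_detect)
    attributor = train_attributor(X, y_attr)

    unseen = rng.normal(size=(1, 50))
    if detector.predict(unseen)[0] == 1:
        print("Predicted source GAN:", attributor.predict(unseen)[0])
    else:
        print("Model appears to be trained on real data.")
```

In this framing, attribution is only attempted when the detector flags the model as trained on generated data, mirroring the two-step order described in the abstract.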
