Federated Dataset Dictionary Learning For Multi-Source Domain Adaptation

DOI:
10.60864/k565-3k67
Citation Author(s):
Fabiola Espinoza Castellon, Eduardo Fernandes Montesuma, Fred Ngolè Mboula, Aurélien Mayoue, Antoine Souloumiac, Cédric Gouy-Pailler
Submitted by:
Eduardo Fernand...
Last updated:
6 June 2024 - 10:50am
Document Type:
Poster
Document Year:
2024
Event:
Presenters:
Eduardo Fernandes Montesuma
Paper Code:
MLSP-P13.3

In this article, we propose an approach to federated domain adaptation, a setting in which distributional shift exists among clients and some of them hold only unlabeled data. The proposed framework, FedDaDiL, tackles this challenge through dictionary learning over empirical distributions. In our setting, clients' distributions represent particular domains, and FedDaDiL collectively trains a federated dictionary of empirical distributions. Specifically, we build upon the Dataset Dictionary Learning (DaDiL) framework by designing collaborative communication protocols and aggregation operations. These protocols keep clients' data private, thus enhancing overall privacy compared to the centralized counterpart. We empirically demonstrate that our approach successfully generates labeled data on the target domain through extensive experiments on the (i) Caltech-Office, (ii) TEP, and (iii) CWRU benchmarks. Furthermore, we compare our method to its centralized counterpart and to other baselines in federated domain adaptation.
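To make the federated protocol concrete, the following is a minimal sketch of the round structure described above: the server broadcasts dictionary atoms (here, empirical distributions represented as small point clouds), each client refines them against its private local data, and the server aggregates the parameter updates. All names and the simplified local update rule are illustrative assumptions, not the authors' implementation; in particular, DaDiL fits atoms via Wasserstein-barycenter losses, which we replace here with a crude nearest-point pull for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: each dictionary atom is an empirical distribution,
# represented by a small support of points in feature space.
n_atoms, n_support, n_features = 2, 8, 3
atoms = [rng.normal(size=(n_support, n_features)) for _ in range(n_atoms)]

# Simulated clients: each holds a private dataset from its own domain
# (distributional shift is mimicked by different location parameters).
clients = [rng.normal(loc=c, size=(32, n_features)) for c in (-1.0, 0.0, 1.0)]

def local_update(atoms, data, lr=0.1):
    """Client-side step (stand-in for DaDiL's Wasserstein-based fit):
    move each atom's support points toward the client's local data."""
    updated = []
    for atom in atoms:
        # Pairwise distances between atom support points and data points.
        dists = np.linalg.norm(atom[:, None, :] - data[None, :, :], axis=-1)
        nearest = data[dists.argmin(axis=1)]  # nearest data point per support point
        updated.append(atom + lr * (nearest - atom))
    return updated

def aggregate(client_atoms):
    """Server-side step: FedAvg-style averaging of atom parameters.
    Only atom updates travel; raw client data never leaves the clients."""
    return [np.mean([ca[k] for ca in client_atoms], axis=0)
            for k in range(len(client_atoms[0]))]

# Federated rounds: broadcast -> local refinement -> aggregation.
for _ in range(10):
    client_atoms = [local_update(atoms, data) for data in clients]
    atoms = aggregate(client_atoms)
```

The privacy argument in the abstract hinges on the aggregation step: clients exchange only dictionary parameters, never samples, which is what distinguishes FedDaDiL from its centralized counterpart.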
