ENABLING ON-DEVICE TRAINING OF SPEECH RECOGNITION MODELS WITH FEDERATED DROPOUT

Citation Author(s):
Dhruv Guliani, Lillian Zhou, Changwan Ryu, Tien-Ju Yang, Harry Zhang, Yonghui Xiao, Françoise Beaufays, Giovanni Motta
Submitted by:
Dhruv Guliani
Last updated:
23 May 2022 - 4:58am
Document Type:
Presentation Slides
Document Year:
2022
Presenters:
Dhruv Guliani
Paper Code:
MLSP-L4.1

Abstract

Federated learning can be used to train machine learning models at the edge, on local data that never leave devices, providing privacy by default. However, it introduces communication and computation costs on clients' devices. These costs are strongly correlated with the size of the model being trained, and are significant for state-of-the-art automatic speech recognition models. We propose using federated dropout to reduce the size of client models while training a full-size model server-side. We provide empirical evidence of the effectiveness of federated dropout, and propose a novel approach to vary the dropout rate applied at each layer. Furthermore, we find that federated dropout enables a set of smaller sub-models within the larger model to independently achieve low word error rates, making it easier to dynamically adjust the size of the model deployed for inference.
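The core idea above — the server samples a smaller sub-model per client by dropping units at a per-layer rate, then scatters the client's trained sub-weights back into the full model — can be sketched as follows. This is a minimal illustration with NumPy on dense layers only, not the paper's implementation; the function names (`sample_submodel`, `merge_update`) and the list-of-`(W, b)` representation are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_submodel(weights, keep_fractions):
    """For each dense layer, keep a random subset of output units.

    `weights` is a list of (W, b) pairs; `keep_fractions` gives the
    per-layer keep rate (1 - dropout rate), mirroring the idea of
    varying the dropout rate applied at each layer.
    """
    sub, masks = [], []
    in_keep = None  # surviving input indices, inherited from previous layer
    for (W, b), frac in zip(weights, keep_fractions):
        n_out = W.shape[1]
        out_keep = np.sort(
            rng.choice(n_out, size=max(1, int(frac * n_out)), replace=False)
        )
        # Drop rows removed by the previous layer, then drop output columns.
        W_sub = W if in_keep is None else W[in_keep, :]
        sub.append((W_sub[:, out_keep], b[out_keep]))
        masks.append((in_keep, out_keep))
        in_keep = out_keep
    return sub, masks

def merge_update(weights, masks, client_sub):
    """Scatter a client's trained sub-model back into the full model."""
    for (W, b), (in_keep, out_keep), (W_sub, b_sub) in zip(weights, masks, client_sub):
        rows = np.arange(W.shape[0]) if in_keep is None else in_keep
        W[np.ix_(rows, out_keep)] = W_sub
        b[out_keep] = b_sub
```

Because each client receives only the kept rows and columns, its download, compute, and upload all shrink roughly with the product of the keep fractions of adjacent layers.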

Files

[Presentation] ICASSP 2022 Federated Dropout.pdf