Exploring Heterogeneous Characteristics of Layers in ASR Models for More Efficient Training

Citation Author(s):
Lillian Zhou, Dhruv Guliani, Andreas Kabel, Giovanni Motta, Françoise Beaufays
Submitted by:
Lillian Zhou
Last updated:
11 May 2022 - 2:16pm
Document Type:
Poster
Document Year:
2022
Presenter's Name:
Lillian Zhou
Paper Code:
MLSP-20.1

Abstract 

Transformer-based architectures have been the subject of research aimed at understanding their overparameterization and the non-uniform importance of their layers. Applying these approaches to Automatic Speech Recognition, we demonstrate that state-of-the-art Conformer models generally contain multiple ambient layers. We study the stability of these layers across runs and model sizes, propose that group normalization may be used without disrupting their formation, and examine their correlation with model weight updates in each layer. Finally, we apply these findings to Federated Learning to improve the training procedure by targeting Federated Dropout to layers by importance. This allows us to reduce the model size optimized by clients without quality degradation, and shows potential for future exploration.
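The idea of targeting Federated Dropout by layer importance can be sketched as follows. This is a minimal illustration, not the paper's exact scheme: the layer names, the linear mapping from an importance score to a drop rate, and the helper functions are all assumptions introduced here for clarity.

```python
import numpy as np

def per_layer_drop_rates(importance, base_rate=0.5):
    """Assign higher dropout to less important (ambient) layers.

    `importance` maps layer name -> score in [0, 1]. The linear
    scaling below is an illustrative choice, not the paper's method.
    """
    return {name: base_rate * (1.0 - score)
            for name, score in importance.items()}

def federated_dropout_mask(shape, drop_rate, rng):
    """Binary mask selecting the sub-model a client will train."""
    return (rng.random(shape) >= drop_rate).astype(np.float32)

rng = np.random.default_rng(0)
# Hypothetical importance scores: block 1 matters, block 7 is "ambient".
importance = {"conformer_block_1": 0.9, "conformer_block_7": 0.2}
rates = per_layer_drop_rates(importance)
masks = {name: federated_dropout_mask((4, 4), rate, rng)
         for name, rate in rates.items()}
```

Under this toy scaling, the important block gets a drop rate of 0.05 while the ambient block gets 0.4, so clients optimize a smaller sub-model mostly carved out of the less important layers.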


Dataset Files

Poster - Exploring Heterogeneous Characteristics of Layers in ASR Models for More Efficient Training
