Abstract
Federated hotword training enables the development of high-quality models on real-world user data that is kept entirely on-device. However, such training relies on existing teacher models, which limits the quality of the synthetic labels provided to the student model during federated training. This disclosure describes techniques that use feature-wise linear modulation to incorporate an utterance-level label prompt as an input for federated hotword training by modulating the output of an intermediate layer. The feature-wise modulation layer accepts utterance-level label prompts, which are available when the teacher model is trained centrally. As a result, the teacher model learns to associate the utterance-level signal with the correct frame-level activation pattern during central training. Such a model can then be deployed as a teacher for federated training on user devices. During federated training, on-device signals correlated with utterance-level labels, such as the output of on-device ASR models, binary classifiers, metadata, etc., are leveraged for improved teacher performance.
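The feature-wise linear modulation (FiLM) mechanism described above can be illustrated with a minimal sketch. The idea is that an utterance-level label prompt embedding is projected into per-channel scale and shift parameters that modulate an intermediate feature map of the hotword model. The function names, shapes, and random projections below are illustrative assumptions, not the disclosed implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def film_modulate(features, prompt_embedding, w_gamma, w_beta):
    """Apply feature-wise linear modulation (FiLM) to frame-level features.

    features:          (frames, channels) intermediate layer output
    prompt_embedding:  (prompt_dim,) utterance-level label prompt
    w_gamma, w_beta:   (prompt_dim, channels) learned projections
                       (random stand-ins here for illustration)
    """
    gamma = prompt_embedding @ w_gamma  # per-channel scale, shape (channels,)
    beta = prompt_embedding @ w_beta    # per-channel shift, shape (channels,)
    # Broadcast the utterance-level modulation over every frame.
    return gamma * features + beta

frames, channels, prompt_dim = 5, 8, 4
features = rng.standard_normal((frames, channels))
prompt = rng.standard_normal(prompt_dim)  # e.g., encodes "hotword present"
w_gamma = rng.standard_normal((prompt_dim, channels))
w_beta = rng.standard_normal((prompt_dim, channels))

modulated = film_modulate(features, prompt, w_gamma, w_beta)
print(modulated.shape)  # (5, 8): same shape as the input feature map
```

Because the modulated output keeps the shape of the intermediate layer, the FiLM layer can be inserted into an existing teacher architecture without changing downstream layers; during federated training, the prompt embedding would instead be derived from on-device signals such as ASR output or metadata.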
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Fowl, Liam; Moreno, Ignacio Lopez; Peng, Cheng-Chieh; Chen, Justin; Partridge, Kurt; and Chen, Neng, "Teacher Prompting for Federated Hotword Training", Technical Disclosure Commons, (September 07, 2023).
https://www.tdcommons.org/dpubs_series/6232