Abstract

Techniques are described for augmenting user interaction with smart glasses or other wearable devices that receive audio input. In a configuration phase, a user records an audio signature and links it to the action(s) and/or features to be activated when the signature is detected. Audio signature patterns are parameterized by the number of audio events, the time interval between events, and the type of each event. Entropy-aware coding is utilized such that the most commonly used features are linked to audio signature patterns (codes) that are easier to generate. A trained local audio event classifier operates in a sliding-window fashion to generate per-window inferences. A time-series score, which is a measure of classifier certainty, is determined from the sliding-window inferences. A decoder determines the intended user routine based on the time-series score(s), and the identified routine is performed, e.g., to wake the device or to launch a particular application. Detection of the audio input can be performed by the wearable device itself or by other devices such as a smartphone.
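The pipeline described above can be sketched in simplified form: a classifier scores overlapping windows of the audio signal, the resulting time series of certainty scores is thresholded into a sequence of detected events, and a decoder looks that sequence up in a table of signatures registered during the configuration phase. All names, thresholds, and the toy energy-based classifier below are illustrative assumptions, not details from the disclosure; a real system would use a trained model over audio features.

```python
def sliding_window_scores(samples, window, hop, classify):
    """Apply the event classifier to overlapping windows of the signal.
    Returns a list of (label, certainty) pairs, one per window."""
    return [classify(samples[i:i + window])
            for i in range(0, len(samples) - window + 1, hop)]

def decode(window_scores, signatures, threshold=0.8):
    """Threshold the per-window certainty scores into an event sequence,
    then look the sequence up in the table of registered signatures."""
    events, in_event = [], False
    for label, score in window_scores:
        if score >= threshold:
            if not in_event:           # rising edge: a new event begins
                events.append(label)
            in_event = True
        else:
            in_event = False           # gap between events
    return signatures.get(tuple(events))

# Hypothetical signature table built during the configuration phase; per
# entropy-aware coding, the most common routine gets the simplest pattern.
signatures = {
    ("tap", "tap"): "wake_device",
    ("tap", "tap", "knock"): "launch_camera",
}

def toy_classifier(window):
    """Stand-in for the trained local audio event classifier: labels a
    window by its mean absolute amplitude."""
    energy = sum(abs(x) for x in window) / len(window)
    if energy > 0.5:
        return ("tap", 0.95)
    if energy > 0.2:
        return ("knock", 0.9)
    return ("silence", 0.1)

# Two loud bursts separated by silence decode to the "wake" routine.
samples = [0.9] * 4 + [0.0] * 4 + [0.9] * 4
routine = decode(sliding_window_scores(samples, 4, 4, toy_classifier),
                 signatures)
```

This separates the per-window inference step from the decoding step, mirroring the classifier/decoder split in the abstract.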

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
