Abstract

Speech recognition enables users to interact with devices via voice. However, recognition errors during such interactions can be disruptive and lead to an unsatisfactory user experience. This disclosure describes the use of language modeling to recover from automatic speech recognition (ASR) errors by identifying broken queries. The full natural language understanding (NLU) stack is executed to obtain a coherent alternative speech recognition hypothesis. The alternative recognition (or query) is run in parallel with the original, misrecognized query. The actions triggered by the misrecognized query and the NLU-augmented query are then compared to pick the interpretation that is more likely to be correct.
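The recovery flow described above can be sketched as follows. This is a minimal illustrative sketch, not the actual implementation: all names (`rewrite_with_lm`, `run_nlu`, the example queries, and the confidence values) are hypothetical stand-ins assumed for demonstration.

```python
from concurrent.futures import ThreadPoolExecutor

def rewrite_with_lm(query):
    # Hypothetical stand-in for a language model that proposes a
    # coherent alternative to a possibly misrecognized query.
    corrections = {
        "turn on the lights in the kitten": "turn on the lights in the kitchen",
    }
    return corrections.get(query, query)

def run_nlu(query):
    # Hypothetical stand-in for the NLU stack: maps a query to an
    # (action, confidence) interpretation.
    known_actions = {
        "turn on the lights in the kitchen": ("lights_on", 0.95),
    }
    return known_actions.get(query, ("unknown", 0.10))

def recover(original_query):
    alternative_query = rewrite_with_lm(original_query)
    # Run the original and the NLU-augmented query in parallel.
    with ThreadPoolExecutor(max_workers=2) as pool:
        orig_future = pool.submit(run_nlu, original_query)
        alt_future = pool.submit(run_nlu, alternative_query)
        orig_action, orig_conf = orig_future.result()
        alt_action, alt_conf = alt_future.result()
    # Compare the triggered actions and keep the interpretation that
    # is more likely to be correct.
    if alt_conf > orig_conf:
        return alternative_query, alt_action
    return original_query, orig_action
```

For a misrecognized query such as "turn on the lights in the kitten", the alternative interpretation "turn on the lights in the kitchen" would win the comparison and its action would be executed.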

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
