Abstract

Automatic speech recognition (ASR) techniques implemented in a virtual assistant or other application can sometimes fail to correctly transcribe a user query, even when utilizing user-permitted contextual information. Such recognition failures lead to misunderstanding of user intent. While increasing the strength of contextual biasing of speech recognition can fix misrecognition, too strong a bias can hurt queries that don’t benefit from such adjustments. This disclosure describes techniques that, upon failure to find a suitable transcription, intent, or response, increase the contextual bias provided to the ASR and re-run speech recognition to obtain a better transcription of user speech and a better determination of user intent. Effectively, two or more speech recognition results are obtained, each at a different contextual bias strength. The result with the best intent is selected, and a corresponding response is provided to the query.
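
The flow described above can be illustrated with a minimal sketch. The function names (run_asr, infer_intent), the bias levels, and the confidence threshold below are hypothetical placeholders for illustration only, not an actual API from the disclosure:

```python
# Hypothetical sketch: re-run ASR with increasing contextual bias until a
# suitable intent is found, then pick the best result across bias levels.
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class RecognitionResult:
    transcription: str
    intent: Optional[str]
    intent_confidence: float


def run_asr(audio: bytes, context: dict, bias_strength: float) -> str:
    """Placeholder: transcribe audio with contextual biasing at the given strength."""
    raise NotImplementedError


def infer_intent(transcription: str) -> Tuple[Optional[str], float]:
    """Placeholder: return (intent, confidence) inferred from a transcription."""
    raise NotImplementedError


def recognize_with_escalating_bias(
    audio: bytes,
    context: dict,
    bias_levels: Tuple[float, ...] = (0.0, 0.5, 1.0),  # assumed example values
    min_confidence: float = 0.7,                       # assumed example threshold
) -> RecognitionResult:
    """Run ASR at increasing contextual-bias strengths; return the result
    whose inferred intent is most suitable."""
    attempts = []
    for strength in bias_levels:
        transcription = run_asr(audio, context, bias_strength=strength)
        intent, confidence = infer_intent(transcription)
        result = RecognitionResult(transcription, intent, confidence)
        # If this bias level already yields a suitable intent, stop escalating.
        if intent is not None and confidence >= min_confidence:
            return result
        attempts.append(result)
    # Otherwise, select the result with the best (highest-confidence) intent.
    return max(attempts, key=lambda r: r.intent_confidence)
```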

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
