While speech is an important input mechanism used in many products, interpreting user speech is challenging when the input is ambiguous, e.g., when a spoken phrase may represent punctuation or a command rather than verbatim text. This disclosure describes combining speech analysis with gesture recognition to automatically disambiguate verbatim text input (dictation) from commands. User-provided speech and gestures are analyzed jointly to interpret the spoken input, without the user having to switch between text-entry and command-entry modes.
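The disclosure does not specify an implementation, but the core idea of using a concurrent gesture to resolve ambiguous spoken phrases can be sketched as follows. All names here (the gesture label `command_gesture`, the ambiguous-phrase set, and the `interpret` function) are hypothetical illustrations, not part of the original text:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpokenInput:
    text: str                # transcribed speech, e.g. "period"
    gesture: Optional[str]   # concurrent gesture label from a recognizer, if any

# Hypothetical phrases that are ambiguous between dictation and commands:
# "period" may be the punctuation command or the literal word.
AMBIGUOUS_PHRASES = {"period", "comma", "new line", "delete that"}

def interpret(inp: SpokenInput) -> str:
    """Classify spoken input as 'command' or 'dictation'.

    For ambiguous phrases, a concurrent confirming gesture (here the
    hypothetical label 'command_gesture') resolves the ambiguity,
    so the user never switches input modes explicitly.
    """
    phrase = inp.text.strip().lower()
    if phrase in AMBIGUOUS_PHRASES:
        # Gesture disambiguates: with a command gesture, execute as a
        # command; without one, insert the phrase as verbatim text.
        return "command" if inp.gesture == "command_gesture" else "dictation"
    # Unambiguous speech is treated as ordinary dictation.
    return "dictation"

print(interpret(SpokenInput("period", "command_gesture")))  # command
print(interpret(SpokenInput("period", None)))               # dictation
```

In a real system, the hard classification above would likely be replaced by a probabilistic fusion of the speech recognizer's and gesture recognizer's confidence scores, but the mode-free interaction pattern is the same.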

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.