Abstract
Stylus apps support handwritten gestures for common actions, typically providing a menu mode to distinguish ordinary strokes from gestures. However, switching between the menu and the canvas creates context-switching friction. Moreover, attempting to reduce that friction by eliminating the context switch can lead to gestures being triggered unintentionally, unreliably, or unpredictably. This disclosure describes a multimodal gesture interface for touchscreens that, with user permission, interprets spoken input from the user to confirm a gesture and its intended behavior. For example, after drawing a circle, the user can say “select” to trigger the select gesture for the strokes that the circle bounds. Alternatively, the user can say nothing to retain the circle as an ordinary stroke. As another example, the user can speak a different command, e.g., “delete,” to erase the strokes that the circle intersects.
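One possible shape for the confirmation flow is sketched below. This is a minimal illustration, assuming a web-based canvas app and the browser Web Speech API; the helpers `isClosedLoop`, `commitAsInk`, `selectBounded`, and `eraseIntersecting`, along with the voice-window duration, are hypothetical placeholders for the drawing app's own logic, not part of the disclosure.

```typescript
// A stroke is the ordered list of points captured from the stylus.
type Point = { x: number; y: number };
type Stroke = Point[];

const VOICE_WINDOW_MS = 1500; // hypothetical: how long to listen after a candidate gesture

// Hypothetical helpers assumed to be supplied by the drawing app.
declare function isClosedLoop(stroke: Stroke): boolean;     // does the stroke form a rough loop?
declare function commitAsInk(stroke: Stroke): void;         // keep the stroke as ordinary ink
declare function selectBounded(stroke: Stroke): void;       // select strokes the loop bounds
declare function eraseIntersecting(stroke: Stroke): void;   // erase strokes the loop intersects

// Called when the stylus lifts and a stroke is complete.
function onStrokeEnd(stroke: Stroke): void {
  // Only closed loops are treated as candidate gestures; everything else is plain ink.
  if (!isClosedLoop(stroke)) {
    commitAsInk(stroke);
    return;
  }

  // With user permission, open a short voice window to confirm the gesture.
  const SpeechRecognitionCtor =
    (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
  if (!SpeechRecognitionCtor) {
    commitAsInk(stroke); // no speech support: fall back to plain ink
    return;
  }

  const recognition = new SpeechRecognitionCtor();
  recognition.lang = "en-US";
  let handled = false;

  recognition.onresult = (event: any) => {
    if (handled) return; // the silence timeout already resolved this stroke
    handled = true;
    recognition.stop();
    const command = event.results[0][0].transcript.trim().toLowerCase();
    if (command.includes("select")) {
      selectBounded(stroke);     // "select": act on strokes the circle bounds
    } else if (command.includes("delete")) {
      eraseIntersecting(stroke); // "delete": erase strokes the circle intersects
    } else {
      commitAsInk(stroke);       // unrecognized command: retain the circle as a stroke
    }
  };

  recognition.start();

  // Silence: once the voice window closes, retain the circle as an ordinary stroke.
  setTimeout(() => {
    if (!handled) {
      handled = true;
      recognition.stop();
      commitAsInk(stroke);
    }
  }, VOICE_WINDOW_MS);
}
```

In this sketch, silence is the default: the stroke is committed as ink unless a recognized command arrives within the voice window, which matches the behavior the abstract describes.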
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Buckley, Thomas; Mickle, Robert; Murray, Abraham; Al Karim, Tayeb; Chan, Hiu Ying; and Cirimele, Maria, "Multi-modal Gesture Triggering for Touchscreen Input", Technical Disclosure Commons, (September 13, 2021).
https://www.tdcommons.org/dpubs_series/4581