Inventor(s)

Anonymous

Abstract

Current touchscreen interfaces cannot distinguish between individual fingers or determine the pose of the user’s hand, which limits the kinds of user input a touchscreen can recognize. As discussed herein, a statistical model can be trained using training data that includes sensor readings known to be associated with various hand poses and gestures. The trained statistical model can be configured to determine arm, hand, and/or finger configurations and forces (i.e., handstates) based on sensor readings, e.g., obtained via a wearable device such as a wristband with wearable sensors. The statistical model can identify user input from the handstate detected by the wearable device. For example, a handstate can include an identification of the portion of the hand that is interacting with the touchscreen, the position of a user’s finger relative to the touchscreen, an identification of which finger or fingers of the user’s hand are interacting with the touchscreen, etc. The handstates can be used to control any aspect(s) of the touchscreen, or a connected device indirectly through the touchscreen.
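As a rough illustration of the training pipeline described above, the sketch below trains a classifier that maps windows of wearable sensor readings to handstate labels and then classifies a new window at run time. The sensor dimensions, handstate labels, synthetic data, and choice of classifier are all illustrative assumptions, not details taken from the disclosure.

```python
# Minimal sketch (hypothetical) of training a statistical model that maps
# wearable sensor readings (e.g., wristband sensor windows) to handstates.
# Feature dimensions, labels, and the classifier are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical handstate labels: which finger is contacting the touchscreen.
HANDSTATES = ["thumb", "index", "middle", "ring", "pinky"]

# Synthetic stand-in for labeled training data: each sample is a flattened
# window of 16 sensor channels x 50 time steps, labeled with a handstate.
n_per_class, n_features = 200, 16 * 50
X = np.vstack([
    rng.normal(loc=k, scale=1.0, size=(n_per_class, n_features))
    for k in range(len(HANDSTATES))
])
y = np.repeat(np.arange(len(HANDSTATES)), n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# A simple statistical model: standardize features, then a linear classifier.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# At run time, a new sensor window from the wearable is classified into a
# handstate, which the touchscreen can then use to interpret the touch.
new_window = rng.normal(loc=1.0, scale=1.0, size=(1, n_features))
print("predicted handstate:", HANDSTATES[model.predict(new_window)[0]])
```

In a real system the synthetic arrays would be replaced with recorded sensor windows labeled with known hand poses and gestures, and the linear classifier could be swapped for any suitable statistical model.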

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
