Inventor(s)

WeiChung Chang

Abstract

Conventional input methods for compact electronic devices, such as on-screen virtual keyboards, suffer from limitations including cramped layouts, a lack of tactile feedback, and unsuitability for touchless environments such as augmented reality. These limitations often lead to frequent typing errors and reduced usability for complex tasks. To address these and other challenges, a camera-based virtual input method is described. A device's camera may capture images of a user's hand and finger movements as they tap on any surface or mimic typing in the air. A prediction model analyzes these movements not as absolute positions but as relative input patterns, that is, sequences of motion and their spatial relationships. These patterns are combined with contextual information, such as the user's typing speed, language models, and user history, to infer the intended input even when gestures are imprecise. A hybrid architecture that uses both on-device and cloud processing may also be used to balance responsiveness and accuracy. Ultimately, described implementations may provide a flexible, accurate, and power-efficient input solution that overcomes the constraints of physical and on-screen keyboards for a range of mobile and augmented reality applications.
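As one way to illustrate how relative motion patterns and contextual signals might be combined, the following Python sketch decodes an imprecise key-to-key gesture using a toy spatial model and a bigram language prior. All names, layouts, and probabilities here (RelativeTap, KEY_POSITIONS, BIGRAM_PRIOR, score_candidate, decode_next_key) are hypothetical illustrations, not part of the described disclosure.

```python
import math
from dataclasses import dataclass
from typing import Dict, Tuple

# Hypothetical relative-motion observation: displacement of a fingertip
# between consecutive taps, rather than an absolute position.
@dataclass
class RelativeTap:
    dx: float           # horizontal displacement relative to the previous tap
    dy: float           # vertical displacement relative to the previous tap
    interval_ms: float  # time since the previous tap (a proxy for typing speed)

# Toy relative layout: key centers on a notional QWERTY-like grid
# (arbitrary units), used only to compute expected key-to-key displacements.
KEY_POSITIONS: Dict[str, Tuple[float, float]] = {
    "q": (0.0, 0.0), "w": (1.0, 0.0), "e": (2.0, 0.0), "r": (3.0, 0.0),
    "a": (0.3, 1.0), "s": (1.3, 1.0), "d": (2.3, 1.0), "f": (3.3, 1.0),
}

# Toy bigram language prior: illustrative probabilities of the next
# character given the previous one.
BIGRAM_PRIOR: Dict[str, Dict[str, float]] = {
    "w": {"e": 0.6, "a": 0.2, "s": 0.2},
}

def score_candidate(prev_key: str, cand: str, tap: RelativeTap) -> float:
    """Combine spatial agreement of the relative motion with a language prior."""
    px, py = KEY_POSITIONS[prev_key]
    cx, cy = KEY_POSITIONS[cand]
    # Spatial term: Gaussian-like penalty on the mismatch between the observed
    # displacement and the expected displacement from prev_key to cand.
    err = ((cx - px) - tap.dx) ** 2 + ((cy - py) - tap.dy) ** 2
    spatial = math.exp(-err)
    # Contextual term: bigram prior with a small floor for unseen pairs.
    prior = BIGRAM_PRIOR.get(prev_key, {}).get(cand, 0.05)
    return spatial * prior

def decode_next_key(prev_key: str, tap: RelativeTap) -> str:
    """Pick the most plausible next key from an imprecise relative gesture."""
    return max(KEY_POSITIONS, key=lambda k: score_candidate(prev_key, k, tap))

if __name__ == "__main__":
    # A slightly imprecise move from "w" toward "e" still decodes as "e",
    # because the language prior reinforces the spatial evidence.
    print(decode_next_key("w", RelativeTap(dx=0.8, dy=0.2, interval_ms=180)))
```

In a hybrid arrangement of the kind described, a low-confidence decode (for example, a best score below some threshold) could be escalated from such a lightweight on-device model to a larger cloud-side model; that routing policy is likewise only sketched here, not specified by the disclosure.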

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
