Abstract

Some computer users have dexterity limitations or motor impairments, resulting in a reduced ability (or inability) to use a computer mouse, trackpad, or keyboard to effect actions on the computer. Existing accessibility techniques based on eye-gaze tracking, head movements, etc., have limited functionality, lack robustness, or require specialized hardware. This disclosure describes techniques that leverage machine learning (particularly face-landmark tracking and gesture recognition), camera-frame extraction, and low-level system integration to enable users to control the mouse buttons, screen cursor, keyboard, etc., and to perform operating-system (OS) actions with eye and/or face gestures. Per the techniques, face/eye gestures that are bound to key presses can be used to trigger actions by the OS or application software. The techniques enable dexterity-limited or motor-impaired users to access computing in a manner that complements and augments other assistive technologies, offering users a more diverse set of assistive tools with which to customize their experience. The techniques can be implemented on any computing device (e.g., laptop, desktop computer, tablet) with a forward-facing or external camera, and require no additional or specialized hardware.
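The disclosure does not specify an implementation, but the pipeline it describes (camera-frame extraction, face-landmark tracking, gesture recognition, and key-press injection) can be sketched as follows. This is a minimal, hypothetical example assuming MediaPipe for face landmarks, OpenCV for camera capture, and pynput for low-level key synthesis; the landmark indices, thresholds, and the blink-to-spacebar binding are illustrative assumptions, not taken from the disclosure.

```python
# Hypothetical sketch: bind a sustained eye closure (deliberate blink) to a
# key press. Libraries, landmark indices, and thresholds are assumptions.
import cv2
import mediapipe as mp
from pynput.keyboard import Controller, Key

# Commonly used MediaPipe face-mesh indices around the left eye (assumed).
EYE_TOP, EYE_BOTTOM, EYE_LEFT, EYE_RIGHT = 159, 145, 33, 133
BLINK_RATIO = 0.2   # eye treated as closed below this openness ratio (assumed)
HOLD_FRAMES = 5     # frames the eye must stay closed to count as a gesture

keyboard = Controller()
face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1,
                                            refine_landmarks=True)
cap = cv2.VideoCapture(0)   # forward-facing or external camera
closed_frames = 0

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Camera-frame extraction + face-landmark tracking.
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        lm = results.multi_face_landmarks[0].landmark
        # Gesture recognition: vertical lid gap relative to eye width.
        openness = abs(lm[EYE_TOP].y - lm[EYE_BOTTOM].y) / (
            abs(lm[EYE_LEFT].x - lm[EYE_RIGHT].x) + 1e-6)
        closed_frames = closed_frames + 1 if openness < BLINK_RATIO else 0
        if closed_frames == HOLD_FRAMES:
            # Low-level system integration: inject the bound key press,
            # which the OS or application handles like a physical keystroke.
            keyboard.press(Key.space)
            keyboard.release(Key.space)

cap.release()
```

In the same spirit, other face/eye gestures (e.g., mouth open, head tilt, gaze direction) could be mapped to other key presses or cursor movements, giving the user a configurable gesture-to-action binding layer on top of the OS input system.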

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
