Abstract

A system is described that enables a computing device (e.g., a mobile phone, a smartwatch, a tablet computer, etc.) to take actions based on a user input received at an interface element, such as a direction of an air gesture input or a duration of the air gesture input. The computing device may use a radio detection and ranging (radar) system to detect air gestures as inputs. An air gesture input refers to any non-touch input detected by the computing device, including, for example, any gesture performed in the “air” above or below a surface of the computing device using any finger, hand, body part, stylus, or any other object that may be detected by the computing device as described herein. The computing device may record gesturing data or typing data and may use the recorded data to train a machine-learned model. In response to recognizing an air gesture, the computing device may determine a specific action to perform. In some examples, the computing device may detect air gestures (e.g., swipe left, swipe right, swipe up, swipe down, etc.) to navigate media content (e.g., a video, an audio track, a picture, a slideshow, etc.) in a forward or backward direction (e.g., next, previous, fast forward, rewind, etc.). In some examples, the computing device may track a finger or other object approaching a touchscreen of the computing device or hovering over a virtual keyboard displayed on the touchscreen and may predict a landing spot of the object on the touchscreen. The computing device may enlarge or magnify a keyboard letter, a button, or other graphical element displayed at the predicted landing spot. In other examples, the computing device may magnify and edit (e.g., cut, copy, paste, delete, etc.) content or text based on the detected air gestures.
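The disclosure itself does not include source code; the short Python sketch below only illustrates two of the behaviors summarized above: mapping a recognized air gesture to a media-navigation action, and extrapolating a tracked fingertip's trajectory to predict its landing spot so the nearest keyboard key can be magnified. The function names, the gesture-to-action mapping, and the simple two-point linear extrapolation are illustrative assumptions, not details taken from the disclosure.

```python
"""Minimal sketch (not from the disclosure) of gesture-to-action mapping and
landing-spot prediction. All names and thresholds are assumptions."""

from dataclasses import dataclass

# Assumed mapping from recognized gesture labels to media-navigation actions.
GESTURE_TO_MEDIA_ACTION = {
    "swipe_left": "previous",
    "swipe_right": "next",
    "swipe_up": "fast_forward",
    "swipe_down": "rewind",
}


def media_action_for(gesture: str) -> str | None:
    """Return the media action for a recognized air gesture, if any."""
    return GESTURE_TO_MEDIA_ACTION.get(gesture)


@dataclass
class KeyRegion:
    label: str
    x: float  # key-center x in screen coordinates
    y: float  # key-center y in screen coordinates


def predict_landing_spot(trajectory: list[tuple[float, float, float]]) -> tuple[float, float]:
    """Linearly extrapolate where a tracked fingertip (samples of x, y, and
    height-above-screen z) will reach z == 0, using the last two samples."""
    (x0, y0, z0), (x1, y1, z1) = trajectory[-2], trajectory[-1]
    if z1 >= z0:            # not descending toward the screen; use last position
        return x1, y1
    t = z1 / (z0 - z1)      # remaining steps, at the current rate, until z hits 0
    return x1 + t * (x1 - x0), y1 + t * (y1 - y0)


def key_to_magnify(trajectory, keys: list[KeyRegion]) -> KeyRegion:
    """Pick the on-screen key closest to the predicted landing spot."""
    px, py = predict_landing_spot(trajectory)
    return min(keys, key=lambda k: (k.x - px) ** 2 + (k.y - py) ** 2)


if __name__ == "__main__":
    print(media_action_for("swipe_right"))              # -> next
    keys = [KeyRegion("q", 20, 500), KeyRegion("w", 60, 500), KeyRegion("e", 100, 500)]
    path = [(40, 420, 30.0), (50, 460, 20.0), (58, 490, 12.0)]
    print(key_to_magnify(path, keys).label)              # -> "w"
```

In practice, radar or proximity samples would be noisy, so a filtered estimate (e.g., a Kalman or low-pass filter over the trajectory) would likely replace the two-point extrapolation shown here; that choice is an assumption, not part of the disclosure.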

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
