Abstract

Smart glasses and other wearable devices that include cameras can capture the user’s environment as memories and can also answer questions or surface assistance proactively. However, smart glasses have limited compute and storage capabilities, which makes continuous perception on the device itself infeasible. A paired device such as a smartphone can enable smart glasses to operate in a continuous perception mode and can also provide the compute power needed for artificial intelligence models. A key bottleneck, however, is data transfer between the smart glasses and the paired device, which is constrained by wireless data transfer rates; even with aggressive data-compression schemes, the payload over the glasses-to-phone link remains substantial. This disclosure describes techniques that, with user permission, use a lightweight gating model on the smart glasses to selectively transmit images to the paired device when the images are deemed to be of interest. In particular, eye-tracking data is provided as an input to the gating model, which selects the images (or portions thereof) to be sent to the phone. The techniques leverage the observation that the user’s gaze naturally rests on the portion of the real-world scene that is of importance to them.
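
To make the gating step concrete, the following is a minimal illustrative sketch in Python, not the disclosure’s actual implementation: the function names (crop_around_gaze, maybe_transmit), the crop size, the interest threshold, and the gating_model and send_to_phone callables are all hypothetical placeholders assumed for illustration.

    import numpy as np

    GAZE_CROP_SIZE = 256      # hypothetical side length (pixels) of the gaze-centered crop
    INTEREST_THRESHOLD = 0.7  # hypothetical score above which a crop is transmitted

    def crop_around_gaze(frame, gaze_xy, size=GAZE_CROP_SIZE):
        # Return a square crop of the camera frame centered on the gaze point,
        # clamped so the crop stays inside the frame (assumes frame >= size in both dims).
        h, w = frame.shape[:2]
        cx = int(np.clip(gaze_xy[0], size // 2, w - size // 2))
        cy = int(np.clip(gaze_xy[1], size // 2, h - size // 2))
        return frame[cy - size // 2: cy + size // 2, cx - size // 2: cx + size // 2]

    def maybe_transmit(frame, gaze_xy, gating_model, send_to_phone):
        # Run the lightweight on-device gating model on a gaze-centered crop and
        # send the crop to the paired phone only when the predicted interest is high,
        # so that uninteresting frames never use the wireless link.
        crop = crop_around_gaze(frame, gaze_xy)
        score = gating_model(crop, gaze_xy)   # placeholder: small on-device model returning a score in [0, 1]
        if score >= INTEREST_THRESHOLD:
            send_to_phone(crop)               # placeholder: transmit over the glasses-to-phone link
        return score

In this sketch, the per-frame work on the glasses is limited to a crop and one pass through a small model; the full-resolution frame is only serialized and transmitted when the gaze-conditioned interest score clears the threshold.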

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
