Wearable devices such as augmented reality/mixed reality glasses can detect a user’s gaze and the objects within the user’s field of view. These devices also include virtual assistant functionality that enables the user to issue commands to a virtual assistant, e.g., via voice, gestures, or other input. This disclosure leverages both capabilities to provide recommended actions based on objects that the user gazes at. Upon detecting that the user is gazing at a particular object, recommended actions for the virtual assistant are identified and ranked based on the object and the user’s context. The ranked actions are provided to the user via a suitable modality selected based on that context, e.g., displayed via the glasses or delivered as audio, along with an indication of the input the user can provide to trigger the virtual assistant to perform the action(s). The input modality can likewise be selected based on the user’s context. The recommended actions can span the various domains that the virtual assistant supports, e.g., communications, shopping, scheduling, providing information, etc.
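The identify-rank-present flow described above could be sketched as follows. This is a minimal illustration, not part of the disclosure: the names (`UserContext`, `Action`, `rank_actions`, `choose_modality`), the object-to-action catalog, and the scoring heuristics are all assumptions introduced for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class UserContext:
    location: str                 # e.g., "home", "commuting"
    hands_free: bool              # whether audio output is preferable
    recent_domains: list = field(default_factory=list)

@dataclass
class Action:
    domain: str                   # e.g., "shopping", "scheduling"
    description: str
    trigger_phrase: str           # input the user can give to invoke it

# Hypothetical catalog mapping a detected object to candidate actions
# across the domains the assistant supports.
CATALOG = {
    "coffee_maker": [
        Action("shopping", "Reorder coffee beans", "reorder beans"),
        Action("information", "Show descaling instructions", "how to descale"),
    ],
    "wall_calendar": [
        Action("scheduling", "Add an event", "add event"),
        Action("information", "Show today's agenda", "what's my day"),
    ],
}

def rank_actions(obj: str, ctx: UserContext, top_k: int = 3) -> list:
    """Score candidate actions for the gazed-at object using simple
    context heuristics, and return the top-ranked subset."""
    def score(action: Action) -> float:
        s = 1.0
        # Prefer domains the user has interacted with recently.
        if action.domain in ctx.recent_domains:
            s += 1.0
        # Prefer informational actions while the user is on the move.
        if ctx.location == "commuting" and action.domain == "information":
            s += 0.5
        return s
    return sorted(CATALOG.get(obj, []), key=score, reverse=True)[:top_k]

def choose_modality(ctx: UserContext) -> str:
    """Select an output modality from the user's context."""
    return "audio" if ctx.hands_free else "display"
```

For example, a user at home who recently shopped and gazes at their coffee maker would see the reorder action ranked first, presented on the glasses' display together with its trigger phrase.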
This work is licensed under a Creative Commons Attribution 4.0 License.
Anonymous, "Contextual Action Recommendations For A Virtual Assistant Based On Gaze Information", Technical Disclosure Commons, (October 08, 2020)