Abstract

This document describes techniques that enable a computing device (e.g., a smart watch, ring, glasses, etc.) to support user-defined contextual information display and configuration through natural language. The techniques of this disclosure may address the problem of complicated menus and settings and of unreliable predictive solutions for displaying personalized and/or general information (e.g., stock prices) by introducing voice controls and artificial intelligence (AI) assistance. The described techniques leverage voice input and natural language processing to allow a user to specify what information the user wants to see and the conditions under which the information should be displayed. In one example, an AI model may extract trigger conditions and content from the user's natural language input, where the conditions can be based on location, time, user activity, or other factors. For example, the user may say "show my commute time between 1-3pm" or "display my shopping list." The computing device may then dynamically display the requested information on a chosen surface, such as a watch face, notification, or widget. Contextual awareness may thus allow the AI model to connect a natural language interface (NLI) with smart device personalization. Described techniques may enable users to establish conditions directly via voice so that the information appears at the right time and place. In this way, the techniques of this disclosure may reduce the complexity of configuring the computing device and improve the user experience.
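The extraction step described above can be sketched in simplified form. The snippet below is a minimal, hypothetical stand-in for the AI model: it uses a rule-based parser (rather than a learned model) to pull a content phrase and an optional time-window trigger from commands like those in the examples, and a helper that evaluates the trigger condition. All names (`DisplayRule`, `parse_command`, `should_display`) are illustrative assumptions, not part of the disclosure.

```python
import re
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class DisplayRule:
    """A user-defined rule: what to show, and when."""
    content: str                             # e.g. "commute time"
    time_window: Optional[Tuple[int, int]]   # (start_hour, end_hour) in 24h, or None


def parse_command(utterance: str) -> DisplayRule:
    """Toy stand-in for the AI model: extract the content phrase and an
    optional time-based trigger from a natural-language command such as
    'show my commute time between 1-3pm'."""
    text = utterance.lower().strip()
    window = None
    m = re.search(r"between\s+(\d{1,2})\s*-\s*(\d{1,2})\s*(am|pm)", text)
    if m:
        start, end, meridiem = int(m.group(1)), int(m.group(2)), m.group(3)
        if meridiem == "pm":
            # Convert to 24-hour clock; '1-3pm' becomes (13, 15).
            start, end = start % 12 + 12, end % 12 + 12
        window = (start, end)
        text = text[:m.start()].strip()
    # Strip a leading verb ("show"/"display") and possessive to isolate the content.
    text = re.sub(r"^(show|display)\s+(my\s+)?", "", text)
    return DisplayRule(content=text, time_window=window)


def should_display(rule: DisplayRule, current_hour: int) -> bool:
    """Evaluate the trigger condition against the current hour."""
    if rule.time_window is None:
        return True  # unconditional rule, e.g. 'display my shopping list'
    start, end = rule.time_window
    return start <= current_hour < end
```

In a full system, `should_display` would be one of several trigger evaluators (time, location, activity), and a matching rule would route the content to the chosen surface such as a watch face complication, notification, or widget.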

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
