Abstract

A computing device may dynamically generate a graphical user interface that includes content (e.g., actions, shortcuts, widgets, etc.) related to the current context (e.g., historical user patterns, location, date and time, etc.). The computing device (e.g., a wearable device such as a watch, ring, or glasses; a mobile device such as a smartphone or laptop; or another type of computing device) may use the current context to predict what a user of the computing device may want to do (e.g., predicted actions) so that, rather than navigating through several levels of a user interface, the user may quickly and directly select buttons, icons, or other user interface elements associated with the predicted actions. In some instances, the computing device may also allow the user to provide additional inputs, such as speech, text, scribble, or gesture inputs. The computing device may dynamically generate the graphical user interface by providing the current context to a machine learning model, such as a large language model, which may return the predicted actions. Using the predicted actions, the computing device may select user interface elements for those actions and/or may provide the predicted actions to another machine learning model that dynamically generates the code for the user interface elements.
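As a rough illustration of the described flow, the following minimal sketch (in Python) shows how a device might serialize its current context into a prompt, obtain predicted actions from a language model, and map each action to a user interface element. All names here (CurrentContext, query_llm, predict_actions, build_ui) are hypothetical assumptions for illustration and do not appear in the publication; the model call is stubbed so the sketch is runnable.

```python
# Hypothetical sketch of the described pipeline:
# current context -> language model -> predicted actions -> UI elements.
import json
from dataclasses import dataclass, field


@dataclass
class CurrentContext:
    """Signals a device might gather to describe the current context."""
    location: str
    datetime_iso: str
    recent_user_actions: list[str] = field(default_factory=list)


def query_llm(prompt: str) -> str:
    """Placeholder for a call to a large language model.

    A real device would invoke an on-device or remote model here; this
    stub returns a canned response so the sketch runs end to end.
    """
    return json.dumps([
        {"action": "start_workout", "label": "Start run"},
        {"action": "play_playlist", "label": "Morning playlist"},
    ])


def predict_actions(ctx: CurrentContext) -> list[dict]:
    """Serialize the context into a prompt and parse predicted actions."""
    prompt = (
        "Given this device context, list likely next user actions as JSON:\n"
        f"location={ctx.location}, time={ctx.datetime_iso}, "
        f"history={ctx.recent_user_actions}"
    )
    return json.loads(query_llm(prompt))


def build_ui(actions: list[dict]) -> list[str]:
    """Map each predicted action to a UI element (here, a button label).

    Per the abstract, a second model could instead generate the element
    code directly; this sketch uses a simple template.
    """
    return [f"[Button: {a['label']} -> {a['action']}]" for a in actions]


if __name__ == "__main__":
    ctx = CurrentContext(
        location="home",
        datetime_iso="2024-05-01T07:00:00",
        recent_user_actions=["started a run at 7am yesterday"],
    )
    for element in build_ui(predict_actions(ctx)):
        print(element)
```

Keeping the prediction step (predict_actions) separate from the rendering step (build_ui) mirrors the abstract's two options: the device may select prebuilt elements for each predicted action, or hand the actions to another model that generates the element code.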

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
