Abstract
Obtaining information related to content viewed on a device currently requires users to engage in burdensome actions, such as posing a question to an AI assistant or formulating a search query, both of which require the user to supply the context for their query. This disclosure describes a simple gesture-based user interface that helps users satisfy information needs arising from the content they view. When the user performs a gesture, a prompt is automatically provided to a large language model (LLM) to generate relevant answers. With user permission, the LLM uses the user's current content and profile information to generate personalized answers and/or suggested actions.
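To make the described flow concrete, the sketch below (not part of the disclosure) shows one way a gesture event could be turned into an automatic LLM prompt that incorporates the gestured content and, with permission, profile information. It is a minimal illustration assuming a Python environment; every name in it (GestureEvent, UserProfile, build_prompt, handle_gesture, call_llm) is hypothetical, and the LLM call is stubbed so the example runs as written.

    # Hypothetical sketch; all names are invented for illustration only.
    from dataclasses import dataclass
    from typing import Callable, List, Optional

    @dataclass
    class UserProfile:
        interests: List[str]   # e.g., topics the user follows
        locale: str            # e.g., "en-US"

    @dataclass
    class GestureEvent:
        kind: str              # e.g., "circle" or "long-press"
        selected_text: str     # on-screen content the gesture covered

    def build_prompt(event: GestureEvent, profile: Optional[UserProfile]) -> str:
        # Assemble an LLM prompt from the gestured content; profile
        # information is included only when the user has permitted it.
        prompt = (
            "The user gestured at the following on-screen content:\n"
            f"{event.selected_text}\n"
            "Provide a concise, relevant answer and suggest follow-up actions."
        )
        if profile is not None:
            prompt += (
                f"\nPersonalize for a {profile.locale} user interested in "
                f"{', '.join(profile.interests)}."
            )
        return prompt

    def handle_gesture(event: GestureEvent,
                       profile: Optional[UserProfile],
                       call_llm: Callable[[str], str]) -> str:
        # Triggered automatically by the gesture; the user never has to
        # type a query or restate the context.
        return call_llm(build_prompt(event, profile))

    if __name__ == "__main__":
        # Stub LLM call so the sketch runs end to end.
        fake_llm = lambda prompt: f"[LLM answer for prompt of {len(prompt)} chars]"
        event = GestureEvent(kind="circle",
                             selected_text="Mount Rainier, elevation 14,411 ft")
        profile = UserProfile(interests=["hiking"], locale="en-US")
        print(handle_gesture(event, profile, fake_llm))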
Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Kuang, Cliff, "Automatically Displaying Contextually Relevant Information in Response to Gesture Input on Displayed Content", Technical Disclosure Commons, November 20, 2025.
https://www.tdcommons.org/dpubs_series/8914