Abstract
In extended reality (XR) environments, persistent highlighting of user interface (UI) elements based on gaze can distract users during non-interactive tasks, such as reading. Conversely, a complete lack of highlighting can reduce interaction precision and increase the risk of accidentally activating critical controls. The disclosed technology is a method for adaptive gaze highlighting that dynamically controls the visibility of these visual cues. The method uses a model to analyze multimodal inputs, such as gaze patterns, gestural cues, and application context, to predict a user’s intent to interact. A necessity score is calculated from the UI element’s characteristics and the potential consequences of an incorrect interaction. This allows highlights to be suppressed during passive viewing and displayed only when interaction is probable, thereby reducing cognitive load while preserving interaction accuracy when it is needed.
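The decision logic the abstract describes, combining a predicted intent probability with a per-element necessity score, can be sketched as follows. This is a minimal illustration under stated assumptions, not the disclosed implementation: the names (predict_intent, necessity_score, should_highlight, UIElement), the hand-tuned weights, and the thresholds are all hypothetical stand-ins for the trained multimodal model and scoring the disclosure refers to.

```python
# Hypothetical sketch of adaptive gaze highlighting. All names, weights,
# and thresholds here are illustrative assumptions, not from the disclosure.
from dataclasses import dataclass


@dataclass
class UIElement:
    is_interactive: bool   # e.g., a button vs. static text
    error_severity: float  # 0..1 cost of an accidental activation
    size: float            # normalized on-screen size, 0..1


def predict_intent(gaze_dwell_ms: float, hand_approaching: bool,
                   app_context_interactive: bool) -> float:
    """Stand-in for the multimodal intent model: returns P(interact).

    A real system would use a trained model over gaze patterns, gestural
    cues, and application context; this heuristic only mimics its shape.
    """
    score = min(gaze_dwell_ms / 1000.0, 1.0) * 0.4   # sustained gaze
    score += 0.4 if hand_approaching else 0.0        # gestural cue
    score += 0.2 if app_context_interactive else 0.0  # application context
    return score


def necessity_score(elem: UIElement) -> float:
    """Higher when a cue matters: small targets or high-consequence controls."""
    if not elem.is_interactive:
        return 0.0
    return 0.6 * elem.error_severity + 0.4 * (1.0 - min(elem.size, 1.0))


def should_highlight(elem: UIElement, intent: float,
                     intent_threshold: float = 0.5) -> bool:
    # Passive viewing (e.g., reading): intent stays low, cue is suppressed.
    if intent < intent_threshold:
        return False
    # Interaction probable: draw the highlight only if it is actually needed.
    return necessity_score(elem) > 0.2


if __name__ == "__main__":
    text = UIElement(is_interactive=False, error_severity=0.0, size=0.8)
    delete_btn = UIElement(is_interactive=True, error_severity=0.9, size=0.1)
    # Long dwell while reading, no reach gesture -> highlight suppressed.
    print(should_highlight(text, predict_intent(2000, False, False)))      # False
    # Hand approaching a small, high-consequence control -> highlight shown.
    print(should_highlight(delete_btn, predict_intent(400, True, True)))   # True
```

In this sketch, passive reading keeps the intent estimate below threshold so no highlight is drawn, while a hand reaching toward a small, high-consequence control passes both the intent gate and the necessity check, so the cue appears.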
Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Chugh, Tushar; Mone, Aditya Shrikant; and Kumara, Karthik, "Predicting User Interaction Intent for Adaptive UI Highlighting in Extended Reality Environments", Technical Disclosure Commons, (December 15, 2025)
https://www.tdcommons.org/dpubs_series/9028