Abstract
Navigating interactive elements in augmented and extended reality (AR/XR) environments often relies on hand gestures, controllers, or long-form voice commands, which can be imprecise, physically fatiguing, and inefficient. Current methods lack a mechanism for rapid, low-effort, hands-free target selection. This disclosure describes a process in which a user initiates an interaction mode with a simple voice command or gesture. In response, all visible interactive elements are overlaid with unique, easily pronounceable syllables or words. The user selects a target by speaking the corresponding syllable or word, which triggers the associated action. This technique provides a fast, precise, and discreet method for interacting with user interfaces in spatial computing, reducing physical effort and cognitive load compared to conventional selection methods.
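The sketch below illustrates the selection flow described above: on activation, each visible interactive element is assigned a unique, easily pronounceable syllable, and a recognized spoken word is matched against those labels to trigger the associated action. This is a minimal illustration, not the disclosed implementation; the `InteractiveElement`, `SyllableSelector`, `activate`, and `on_transcript` names are hypothetical, and it assumes the host platform can supply the list of visible elements and a speech-to-text transcript.

```python
from dataclasses import dataclass
from itertools import product
from typing import Callable, Dict, List

# Hypothetical stand-in for a visible interactive element in the AR/XR scene.
@dataclass
class InteractiveElement:
    name: str
    action: Callable[[], None]

CONSONANTS = "bdfgklmnprstv"
VOWELS = "aeiou"

def generate_labels(count: int) -> List[str]:
    """Generate short, pronounceable consonant-vowel syllables ("ba", "be", ...)."""
    labels = ["".join(pair) for pair in product(CONSONANTS, VOWELS)]
    if count > len(labels):
        # Fall back to two-syllable labels ("baba", "babe", ...) for dense scenes.
        labels += ["".join(pair) for pair in product(labels, repeat=2)]
    return labels[:count]

class SyllableSelector:
    """Assigns each visible element a unique syllable and resolves spoken input."""

    def __init__(self) -> None:
        self._targets: Dict[str, InteractiveElement] = {}

    def activate(self, visible_elements: List[InteractiveElement]) -> Dict[str, str]:
        """On mode activation, map labels to elements; return label -> name for rendering overlays."""
        labels = generate_labels(len(visible_elements))
        self._targets = dict(zip(labels, visible_elements))
        return {label: element.name for label, element in self._targets.items()}

    def on_transcript(self, spoken: str) -> bool:
        """Match a recognized word against the assigned labels and trigger the element's action."""
        element = self._targets.get(spoken.strip().lower())
        if element is None:
            return False  # No match; keep the overlays active.
        element.action()
        self._targets.clear()  # Dismiss overlays after a successful selection.
        return True

if __name__ == "__main__":
    elements = [
        InteractiveElement("Settings button", lambda: print("Opened settings")),
        InteractiveElement("Media player", lambda: print("Toggled playback")),
    ]
    selector = SyllableSelector()
    print(selector.activate(elements))  # e.g. {'ba': 'Settings button', 'be': 'Media player'}
    selector.on_transcript("be")        # Prints "Toggled playback"
```

Consonant-vowel syllables are used here because they are short to speak and phonetically distinct, which keeps the spoken selection fast and reduces recognition ambiguity; a production system could substitute any label set that the speech recognizer distinguishes reliably.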
Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Flora, Tiago Camolesi, "Fast Mechanism for Target Selection in Interactive AR/XR Environments", Technical Disclosure Commons, (November 05, 2025)
https://www.tdcommons.org/dpubs_series/8838