Abstract
This disclosure describes techniques to manage tasks executed by software agents in graphical environments, such as extended reality (XR) environments displayed via an XR headset and/or other graphical user interfaces. Per the techniques, tasks may be performed by the user or by an agent. A visual indication shows whether a task is controlled and performed by the user or by an agent. Panels displayed in the graphical environment indicate the task being performed, inputs and/or outputs related to the task, and progress on the task. A task performed by an agent can be intercepted by the user, who takes over control of the task. These techniques enable users to easily distinguish tasks performed by agents and to interrupt agent tasks effectively, providing an efficient and intuitive user interface for task management. When implemented in an XR headset, the techniques enable users to control the user interface via voice, gestures, or other input; to allocate tasks to software agents (supported by a large language model (LLM)); and to monitor and control task performance. The techniques provide easy-to-use mechanisms whereby users can offload tasks to agents and use natural language and/or gesture interaction to control task performance. User friction is reduced by enabling task performance and control without requiring precise physical motion.
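The abstract describes a task model with an ownership indicator, panel display, and user interception. The disclosure does not include code; the following is a minimal Python sketch of one possible representation, where the names `Task`, `Owner`, `panel_label`, and `intercept` are hypothetical illustrations of the described behavior, not the disclosed implementation.

```python
from dataclasses import dataclass, field
from enum import Enum


class Owner(Enum):
    """Who currently controls and performs the task."""
    USER = "user"
    AGENT = "agent"


@dataclass
class Task:
    name: str
    owner: Owner = Owner.AGENT
    progress: float = 0.0  # fraction complete, 0.0 to 1.0
    inputs: dict = field(default_factory=dict)
    outputs: dict = field(default_factory=dict)

    def panel_label(self) -> str:
        """Text a panel might render: owner badge, task name, and progress."""
        badge = "AGENT" if self.owner is Owner.AGENT else "USER"
        return f"[{badge}] {self.name} - {self.progress:.0%} complete"

    def intercept(self) -> None:
        """Transfer control of an agent-performed task to the user."""
        if self.owner is Owner.AGENT:
            self.owner = Owner.USER


# Usage: an agent-owned task is displayed, then intercepted by the user.
task = Task(name="Summarize meeting notes", progress=0.4)
print(task.panel_label())  # [AGENT] Summarize meeting notes - 40% complete
task.intercept()
print(task.panel_label())  # [USER] Summarize meeting notes - 40% complete
```

In this sketch, the owner badge in `panel_label` corresponds to the visual indication of task control, and `intercept` corresponds to the described takeover of an agent task by the user.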
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Rivas, Diego and Shin, D., "User Interface for Efficient Control of Autonomous Agent Tasks", Technical Disclosure Commons, (December 19, 2024)
https://www.tdcommons.org/dpubs_series/7663