Abstract
An automated assistant in an augmented reality (AR) device or smartphone performs coreference resolution. The automated assistant resolves references mentioned in a user’s dialog, e.g., audio input, by jointly analyzing the audio input, visual input, and stored information that represents the user’s memory. The automated assistant performs the coreference resolution so as to conduct an intelligent dialog with the user for shopping, visual question answering (VQA), or other interactive activities.
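The abstract describes resolving a spoken reference against candidates drawn from both the visual scene and stored memory. A minimal sketch of that idea is shown below; the `Candidate` type, the `resolve_reference` function, and the word-overlap scoring are illustrative assumptions, not the disclosed method, which would typically use learned multi-modal models.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Candidate:
    label: str      # textual description of the entity
    source: str     # "visual" (from camera input) or "memory" (stored info)
    salience: float # e.g., detection confidence or memory recency

def resolve_reference(mention: str, candidates: list) -> Optional[Candidate]:
    """Toy resolver: match the mention's words against each candidate's
    label, preferring higher word overlap, then higher salience."""
    words = set(mention.lower().split())
    scored = [
        (len(words & set(c.label.lower().split())), c.salience, c)
        for c in candidates
    ]
    scored.sort(key=lambda t: (t[0], t[1]), reverse=True)
    if scored and scored[0][0] > 0:
        return scored[0][2]
    return None  # no candidate shares any word with the mention
```

For example, given a visual detection labeled "blue shirt" and a remembered item labeled "red shirt", the mention "the red shirt" resolves to the memory candidate because it matches more words, even if the visual candidate is more salient.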
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Anonymous, "Multi-Modal Visual and Memory Coreference Resolution", Technical Disclosure Commons, (April 21, 2020)
https://www.tdcommons.org/dpubs_series/3172