Inventor(s)

Dongeek Shin

Abstract

This document describes techniques that enable a mobile computing device (e.g., a smartphone, smart watch, wearable device, laptop computer, etc.) and an audio device (e.g., earbuds, headset, augmented reality glasses, etc.) to enhance conversational artificial intelligence (AI) with contextual signals (e.g., location data, ambient light, motion signals, etc.). For instance, the mobile computing device may be paired with the audio device, through which a user provides voice input to engage with the AI. However, the AI's responses typically rely only on spoken commands and prior conversation history, which limits the AI's understanding of the user's full context. The techniques of this publication may utilize data from various sensors (e.g., an inertial measurement unit (IMU), an ambient light sensor (ALS), a magnetometer, etc.) to contextually augment human-AI conversations. These sensors can detect non-audio cues such as motion (e.g., user activity, body movement, etc.), ambient light, or orientation (e.g., direction, digital compass data, etc.). By processing these non-audio signals and the resulting user state information, the AI may gain deeper situational awareness of the user's current context and may incorporate that context when formulating responses. The technology may additionally include a proactive mode, in which the AI utilizes contextual signals to initiate a conversation with the user (e.g., delivering a reminder when the user enters a certain location). In this way, the techniques of this publication may enhance the contextual accuracy and responsiveness of conversational AI by integrating non-audio sensor data.
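As a rough illustration of how such contextual augmentation might work, the following Python sketch infers coarse user-state labels from IMU, ALS, and magnetometer readings, prepends them to the transcribed voice query as a context preamble, and includes a simple proactive-mode trigger. All names, thresholds, and the preamble format here are assumptions of this sketch, not details drawn from the publication:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorSnapshot:
    accel_magnitude: float   # m/s^2 from the IMU accelerometer (~9.8 at rest)
    ambient_lux: float       # illuminance from the ambient light sensor
    heading_degrees: float   # 0-360 compass heading from the magnetometer

def infer_user_state(s: SensorSnapshot) -> dict:
    """Map raw sensor readings to coarse, human-readable context labels.
    Thresholds are illustrative, not from the publication."""
    moving = s.accel_magnitude > 10.5    # crude threshold above the gravity baseline
    indoors = s.ambient_lux < 500        # typical indoor illuminance is well under 500 lux
    cardinal = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"][
        int(((s.heading_degrees + 22.5) % 360) // 45)]
    return {
        "activity": "walking" if moving else "stationary",
        "environment": "indoors" if indoors else "outdoors",
        "facing": cardinal,
    }

def augment_prompt(spoken_query: str, state: dict) -> str:
    """Prepend a compact context preamble to the user's transcribed voice query."""
    preamble = (f"[context: user is {state['activity']}, {state['environment']}, "
                f"facing {state['facing']}]")
    return f"{preamble} {spoken_query}"

def proactive_message(state: dict, entered_saved_location: bool) -> Optional[str]:
    """Proactive mode: return a conversation opener when context warrants one."""
    if entered_saved_location:
        return "The user just arrived at a saved location; surface any pending reminders."
    return None

# Example: a user walking outdoors asks a question.
snap = SensorSnapshot(accel_magnitude=11.2, ambient_lux=12000.0, heading_degrees=270.0)
state = infer_user_state(snap)
print(augment_prompt("Where is the nearest coffee shop?", state))
# -> [context: user is walking, outdoors, facing W] Where is the nearest coffee shop?
```

In practice, a system of this kind would likely replace the fixed thresholds above with learned activity- and environment-classification models that fuse the sensor streams, but the overall flow (sensor readings, user-state inference, context-augmented prompt) follows the technique described in the abstract.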

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
