Abstract

When a user issues a query, e.g., a spoken query to a user device such as a smartphone, smart speaker, in-car device, etc., the query may be processed locally on-device and, if permitted by the user, also remotely on a server. The determination of whether a query is processed locally or on a remote server is typically based on whether the device has a network connection. Local processing of queries can consume device resources. When the device is simultaneously in use for other critical tasks, such resource demand can have a negative impact on those tasks. This disclosure describes the use of a trained machine learning model that takes into account user-permitted contextual factors to determine whether query processing is to be performed on-device or on a remote server.
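The routing decision described above can be sketched as a small classifier over user-permitted contextual signals. The feature names (battery level, CPU load, network quality), the weights, and the threshold below are illustrative assumptions, not details from the disclosure; a deployed system would learn its parameters from training data rather than hard-code them.

```python
# Hypothetical sketch of model-based query routing. All signal names,
# weights, and the decision threshold are illustrative assumptions.
from dataclasses import dataclass
import math


@dataclass
class QueryContext:
    battery_pct: float      # 0-100, remaining battery
    cpu_load: float         # 0.0-1.0, current device CPU utilization
    network_quality: float  # 0.0 (offline) to 1.0 (fast, reliable)
    server_permitted: bool  # user consent for remote processing


def server_score(ctx: QueryContext) -> float:
    """Logistic score in (0, 1); higher favors server processing.
    The fixed weights stand in for a trained model's learned parameters."""
    z = (
        2.0 * ctx.cpu_load            # busy device -> prefer offloading
        + 1.5 * ctx.network_quality   # good network -> offloading is cheap
        - 0.02 * ctx.battery_pct      # ample battery -> on-device is fine
    )
    return 1.0 / (1.0 + math.exp(-z))


def route_query(ctx: QueryContext, threshold: float = 0.5) -> str:
    # Without user consent or connectivity, processing must stay on-device.
    if not ctx.server_permitted or ctx.network_quality == 0.0:
        return "on_device"
    return "server" if server_score(ctx) >= threshold else "on_device"
```

For example, a heavily loaded device on a good network would route to the server, while an idle device with full battery, or any device lacking consent or connectivity, keeps the query local.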

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
