Abstract

Large language models (LLMs) can enable application programmers to translate user instructions in natural language into a sequence of corresponding function calls; however, LLMs that provide such capabilities are currently hosted in the cloud. Use of remotely hosted LLMs can introduce operational latency, affect reliability, and be infeasible in certain cases. This disclosure describes the use of an on-device LLM to enable app developers to support natural language interactions with users without having to implement such code within their applications. Per the techniques, with user permission, a user command in natural language received by an application is provided to the on-device LLM, which generates an appropriate sequence of function calls that are executed one at a time, with each function receiving the output of the prior function, to perform the requested task. The techniques enable a deep level of natural language conversational interaction within any application by utilizing application and user context as inputs to the LLM that generates the function calls. The use of application-defined function definitions makes it easier for developers to control the actions of the model, providing greater stability than other approaches such as direct UI control.
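The chained execution the disclosure describes can be sketched as follows. This is a minimal illustration only: the function names, the plan format, and the `on_device_llm_plan` stub (which stands in for the on-device LLM's planner) are assumptions, not part of the disclosure.

```python
from typing import Any, Callable

# Application-defined function definitions that the developer exposes
# to the on-device LLM. Names and behavior are hypothetical.
FUNCTIONS: dict[str, Callable[[Any], Any]] = {
    "find_photos": lambda query: [f"{query}_1.jpg", f"{query}_2.jpg"],
    "create_album": lambda photos: {"album": "New Album", "photos": photos},
    "share_album": lambda album: (
        f"Shared '{album['album']}' with {len(album['photos'])} photos"
    ),
}

def on_device_llm_plan(command: str) -> list[str]:
    """Stub standing in for the on-device LLM: given a natural-language
    command, return an ordered sequence of function names to invoke."""
    return ["find_photos", "create_album", "share_album"]

def execute_plan(command: str, initial_input: Any) -> Any:
    """Execute the planned calls one at a time, feeding each function
    the output of the previous one, as the disclosure describes."""
    result = initial_input
    for name in on_device_llm_plan(command):
        result = FUNCTIONS[name](result)
    return result

print(execute_plan("share my beach photos", "beach"))
```

Because the model emits only names drawn from the developer-supplied registry, the application retains control over which actions can run, which is the stability advantage the disclosure notes over direct UI control.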

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
