Abstract

This publication describes systems and methods for delegating energy-intensive artificial intelligence (AI) tasks from a mobile device. The use of local compute (LC) devices to process AI workloads (e.g., large language model (LLM) queries, AI inference tasks) offloaded from the mobile device is disclosed. A methodology for selecting among original equipment manufacturer (OEM) devices, LC devices, and other devices to process AI workloads is also outlined.
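To make the selection methodology concrete, the following is a minimal sketch of how a mobile client might choose a processing target, assuming a simple heuristic over workload size, battery level, and LC reachability. All type and function names (Workload, DeviceState, selectTarget, etc.) are hypothetical illustrations; the publication does not specify an API.

```kotlin
// Hypothetical sketch of the delegation decision described in the abstract.
// Names and thresholds are assumptions for illustration only.

enum class Target { ON_DEVICE_OEM, LOCAL_COMPUTE, CLOUD }

data class Workload(
    val estimatedGflops: Double,   // rough compute demand of the AI task
    val latencyBudgetMs: Long      // acceptable end-to-end response latency
)

data class DeviceState(
    val batteryPercent: Int,
    val lcReachable: Boolean,      // a local compute (LC) device is on the network
    val lcRoundTripMs: Long,       // measured round-trip latency to the LC device
    val onDeviceBudgetGflops: Double
)

/** Pick a processing target for an energy-intensive AI workload. */
fun selectTarget(w: Workload, s: DeviceState): Target = when {
    // Small jobs that fit the on-device (OEM) accelerator stay local.
    w.estimatedGflops <= s.onDeviceBudgetGflops && s.batteryPercent > 30 ->
        Target.ON_DEVICE_OEM
    // Prefer a reachable LC device when its network latency fits the budget.
    s.lcReachable && s.lcRoundTripMs < w.latencyBudgetMs ->
        Target.LOCAL_COMPUTE
    // Otherwise fall back to a remote/cloud endpoint.
    else -> Target.CLOUD
}
```

In this sketch the LC device is preferred over the cloud whenever it is reachable and fast enough, reflecting the abstract's emphasis on delegating energy-intensive work off the mobile device to nearby local compute.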

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
