Abstract

Artificial Intelligence (AI)/Machine Learning (ML)-based mobile applications are becoming increasingly computation-, memory-, and power-intensive. At the same time, end devices usually have stringent energy, compute, and memory limitations that preclude running complete offline AI/ML inference on-board. Many AI/ML applications therefore offload inference processing from mobile devices to internet data centers (IDCs). Techniques presented herein provide for discovering the closest offload server so that a user equipment (UE) can offload some of the rendering work to that server.
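
The discovery step can be illustrated with a minimal sketch, not taken from the disclosure itself: assuming the UE is given a non-empty list of candidate offload servers (the hostnames, port, and function names below are hypothetical), it measures the TCP connect round-trip time to each candidate and selects the lowest-latency one as the "closest" server.

    import socket
    import time

    def probe_rtt(host, port, timeout=1.0):
        # Measure the TCP connect round-trip time to one candidate
        # offload server; return infinity if it is unreachable.
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return time.monotonic() - start
        except OSError:
            return float("inf")

    def discover_closest_server(candidates):
        # Probe every candidate (assumed non-empty) and keep the one
        # with the lowest RTT, i.e. the "closest" offload server from
        # the UE's point of view.
        rtts = {server: probe_rtt(*server) for server in candidates}
        best = min(rtts, key=rtts.get)
        return best if rtts[best] != float("inf") else None

    # Hypothetical candidate edge servers known to the UE.
    candidates = [("edge1.example.net", 8080),
                  ("edge2.example.net", 8080),
                  ("edge3.example.net", 8080)]
    print(discover_closest_server(candidates))

In practice the RTT probe would be replaced by whatever discovery mechanism the techniques actually propose, but the selection logic of ranking candidates by proximity would be similar.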

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
