Abstract
This disclosure introduces a flexible architecture that supports both host off-load and local AI acceleration through a dock equipped with an AI accelerator. By sharing a PCIe interface between the host and the dock's local SoC, the approach enhances AI processing whether the dock is connected to a network or operating standalone. Implementations range from a basic PCIe mux to advanced lane partitioning that allows dynamic allocation of AI cores.
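The two sharing schemes mentioned above can be illustrated with a minimal sketch. This is a hypothetical model, not the disclosed implementation: the class name `PcieLaneMux`, the lane count, and the method names are illustrative assumptions. A basic mux grants the entire link to one side, while lane partitioning splits the available lanes between the host and the local SoC.

```python
class PcieLaneMux:
    """Hypothetical model of the dock's PCIe sharing logic.

    select():    basic mux -- all lanes follow host presence.
    partition(): lane partitioning -- lanes split between host and SoC.
    """

    def __init__(self, total_lanes: int = 8):
        self.total_lanes = total_lanes
        self.host_lanes = 0   # lanes granted to the host
        self.soc_lanes = 0    # lanes granted to the dock's local SoC

    def select(self, host_present: bool) -> None:
        """Basic mux: the whole link goes to whichever side is active."""
        if host_present:
            self.host_lanes, self.soc_lanes = self.total_lanes, 0
        else:
            self.host_lanes, self.soc_lanes = 0, self.total_lanes

    def partition(self, host_share: float) -> None:
        """Lane partitioning: split lanes by the requested host share (0..1)."""
        if not 0.0 <= host_share <= 1.0:
            raise ValueError("host_share must be between 0 and 1")
        self.host_lanes = int(self.total_lanes * host_share)
        self.soc_lanes = self.total_lanes - self.host_lanes


mux = PcieLaneMux(total_lanes=8)
mux.select(host_present=True)   # host off-load: host gets all 8 lanes
mux.partition(host_share=0.5)   # shared mode: 4 lanes each
```

In a real design the partitioning step would be constrained by the PCIe link widths the endpoints can negotiate (x1/x2/x4/x8), but the sketch captures the dynamic-allocation idea.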
Creative Commons License

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 License.
Recommended Citation
INC, HP, "Flexible Architecture for HOST Off-load and Local/Edge AI Acceleration", Technical Disclosure Commons, (February 05, 2026)
https://www.tdcommons.org/dpubs_series/9280