Abstract
This submission proposes a system that constructs multi-agent AI workflows dynamically at runtime by computing optimal paths through a live network topology of Large Language Model (LLM) providers, AI agents, humans, and services, much as a routing protocol computes packet paths through routers. Treating Artificial Intelligence (AI) tokens as the new currency of intelligent automation, this submission embeds their cost into the routing metric: each LLM provider's and AI agent's inference token cost for a given task contributes to the weight used in path computation.
When the routing algorithm builds a workflow, it can factor in, at runtime, what each step costs based on live token pricing, agent availability, load, reputation, and quality constraints, selecting the optimal combination of LLM models and AI agents for each task.
Workflows are never predefined. Rather, they are assembled fresh from the network state at the moment of each request, adapting automatically as LLM prices change, agents join or leave, providers go offline, and workloads shift, with zero human reconfiguration. In short, the proposed system injects AI token costs into the routing math that builds multi-agent workflows.
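The approach above can be illustrated with a minimal sketch: a shortest-path search (Dijkstra's algorithm here, as one representative routing computation) over a small topology of candidate providers and agents, where each edge weight is a composite of live token cost, current load, and reputation. All node names, metric fields, and weighting coefficients below are illustrative assumptions, not details from the disclosure.

```python
import heapq

# Hypothetical live network state: each edge is a candidate hand-off to the
# next workflow step, annotated with that provider's/agent's live per-task
# token cost, current load, and reputation. Names and values are assumptions.
EDGES = {
    "ingest": [
        ("summarize@provider_a", {"token_cost": 0.8, "load": 0.3, "reputation": 0.9}),
        ("summarize@provider_b", {"token_cost": 0.5, "load": 0.7, "reputation": 0.8}),
    ],
    "summarize@provider_a": [
        ("review@agent_x", {"token_cost": 0.4, "load": 0.2, "reputation": 0.95}),
    ],
    "summarize@provider_b": [
        ("review@agent_x", {"token_cost": 0.4, "load": 0.2, "reputation": 0.95}),
    ],
    "review@agent_x": [],
}

def edge_weight(m, alpha=1.0, beta=0.5, gamma=0.5):
    """Composite routing metric: token cost is the dominant term, penalized
    by load and by low reputation. Coefficients are tunable assumptions."""
    return alpha * m["token_cost"] + beta * m["load"] + gamma * (1.0 - m["reputation"])

def assemble_workflow(start, goal):
    """Dijkstra over the live topology: the lowest-weight path *is* the
    workflow, recomputed fresh from current network state on every request."""
    pq = [(0.0, start, [start])]   # (accumulated weight, node, path so far)
    best = {}
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if best.get(node, float("inf")) <= cost:
            continue
        best[node] = cost
        for nxt, metrics in EDGES.get(node, []):
            heapq.heappush(pq, (cost + edge_weight(metrics), nxt, path + [nxt]))
    return None

cost, path = assemble_workflow("ingest", "review@agent_x")
# With these sample metrics the cheaper-token provider_b wins the summarize step.
```

Because the metric is recomputed on each request, a price drop at provider_a or a load spike at provider_b would change the selected path with no reconfiguration, which is the behavior the abstract describes.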
Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Volkov, Roman and Kudryashov, Yegor, "DYNAMIC ROUTING FOR MULTI-AGENT AI WORKFLOWS. AI TOKEN COST IN METRIC CALCULATION.", Technical Disclosure Commons, ()
https://www.tdcommons.org/dpubs_series/9964