Abstract
Autonomous AI agents generate execution plans that involve delegating tasks, invoking tools, and selecting execution environments across heterogeneous and multi-provider deployments. In government and defense environments, AI artifacts handled by these agents are subject to strict privacy, sovereignty, organizational, and clearance-based policies.
Existing AI governance mechanisms enforce policy compliance at system boundaries or during execution, after an agent has already generated a plan. As a result, agents may produce execution plans that are inherently non-compliant and only fail at runtime. Current systems do not constrain the agent’s planning process itself.
This disclosure introduces a policy-constrained plan-generation method for autonomous AI agents, in which policy constraints associated with AI artifacts are enforced directly within the agent’s planning algorithm. Agent planning is represented as a planner graph, where candidate actions are modeled as graph transitions and policy constraints are applied as hard constraints on those transitions.
During plan generation, candidate actions that violate artifact-level policies are pruned from the planner graph before plan selection. Only policy-compliant execution paths can be generated, ensuring compliance by construction rather than through runtime denial or post-hoc audit.
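The pruning step described above can be sketched as follows. This is a minimal illustrative example, not the disclosed implementation: the `Transition` type, the tag-based policy model, and all names are assumptions introduced here to show how non-compliant transitions could be removed from a planner graph before plan search runs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    """One candidate action in the planner graph (illustrative)."""
    action: str
    environment: str           # e.g. "on-prem", "allied-cloud"
    artifact_tags: frozenset   # policy tags on the AI artifact the action touches

def violates_policy(t: Transition, allowed_envs: dict) -> bool:
    # Hard constraint: every artifact tag must explicitly permit the
    # target environment. Unknown tags fail closed (empty allow-set).
    return any(t.environment not in allowed_envs.get(tag, set())
               for tag in t.artifact_tags)

def prune(transitions: list, allowed_envs: dict) -> list:
    # Remove non-compliant transitions before plan selection, so only
    # policy-compliant execution paths can be generated at all.
    return [t for t in transitions if not violates_policy(t, allowed_envs)]
```

For example, an artifact tagged `"sovereign-only"` whose policy permits only the `"on-prem"` environment would cause any transition routing it to an allied cloud to be pruned; plan search then never sees that path, which is the "compliance by construction" property rather than a runtime denial.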
This proposed approach enables autonomous AI agents operating across sovereign, on-premises, and allied environments to generate execution plans that inherently respect privacy and data-sovereignty requirements, which is critical for government and defense deployments.
Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
M M, Niranjan, "Policy-Constrained Plan Generation for Autonomous AI Agents", Technical Disclosure Commons, (March 16, 2026)
https://www.tdcommons.org/dpubs_series/9536