Abstract

The agent-to-agent (A2A) protocol lacks transparent, verifiable visibility into the artificial intelligence (AI) components driving counterpart behavior, such as large language models (LLMs), auxiliary agents, Model Context Protocol (MCP) servers, and downstream tools. Assessing risk, compatibility, and compliance amid dynamic runtime changes and opaque provenance is therefore difficult, forcing organizations either to over-trust counterparts or to restrict their utility. The disclosed techniques augment the A2A protocol with an AI Bill of Materials (AI BOM) exchange that discloses component inventories, versions, provenance, and attestations; a policy engine that evaluates BOMs to approve, scope, or deny interactions; and selective disclosure to balance transparency with privacy. Sessions are bound to approved BOMs, optionally anchored in a transparency log or blockchain for tamper evidence, with continuous enforcement, monitoring, and auditable updates to maintain trust as systems evolve.
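
For illustration only, the following Python sketch models how an AI BOM exchange and policy evaluation could work under the assumptions stated in the comments; the class names, fields, policy rules, and example values (BomComponent, AiBom, evaluate, blocked_versions) are hypothetical and not the disclosed schema or implementation.

```python
# Minimal sketch, assuming a hypothetical BOM schema and a toy policy;
# the actual A2A extension, field names, and rules may differ.
from dataclasses import dataclass, field
from enum import Enum
import hashlib
import json


class Decision(Enum):
    APPROVE = "approve"
    SCOPE = "scope"    # allow the session but restrict capabilities
    DENY = "deny"


@dataclass
class BomComponent:
    name: str          # an LLM, auxiliary agent, MCP server, or downstream tool
    kind: str          # "llm" | "agent" | "mcp-server" | "tool"
    version: str
    provenance: str    # e.g., supplier or registry reference
    attestation: str | None = None  # signed attestation reference, if any


@dataclass
class AiBom:
    agent_id: str
    components: list[BomComponent] = field(default_factory=list)

    def digest(self) -> str:
        """Content hash used to bind a session to this approved BOM."""
        canonical = json.dumps([vars(c) for c in self.components], sort_keys=True)
        return hashlib.sha256(canonical.encode()).hexdigest()


def evaluate(bom: AiBom, blocked_versions: set[str]) -> Decision:
    """Toy policy: deny unattested components, scope known-risky versions."""
    if any(c.attestation is None for c in bom.components):
        return Decision.DENY
    if any(c.version in blocked_versions for c in bom.components):
        return Decision.SCOPE
    return Decision.APPROVE


if __name__ == "__main__":
    bom = AiBom(
        agent_id="agent-123",
        components=[
            BomComponent("example-llm", "llm", "2.1", "registry.example", "sig:abc"),
            BomComponent("search-tool", "tool", "0.9", "tools.example", "sig:def"),
        ],
    )
    decision = evaluate(bom, blocked_versions={"0.9"})
    # The session could then be bound to bom.digest() and, optionally,
    # anchored in a transparency log or blockchain for tamper evidence.
    print(decision, bom.digest()[:16])
```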

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
