Abstract

Developing multi-agent artificial intelligence (AI) systems for regulated domains, such as finance, can introduce security challenges where a flaw in one component of a monolithic application could potentially compromise the entire system. A multi-agent architecture can be configured to use a coordinator-sub-agent model that creates security boundaries through multi-layered isolation. This approach can involve deploying each agent as a separate application binary running in its own process, assigning each binary a unique service account identity with least-privilege permissions, and enforcing namespaced session isolation so that each agent can access only its own private data within a shared session store, for example, a distributed cache or database. This design can help contain the impact of a potential compromise to a single component, thereby helping to limit access to the data or capabilities of other agents and facilitating a modular framework for developing complex conversational AI.

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
