Inventor(s)

Andrea Morandi

Abstract

Conventional large language model (LLM)-based multi-agent systems suffer from a critical lack of transparency and explainability, making it difficult for users to understand why agents make specific decisions or how they interact with each other. To address these challenges, the techniques presented herein provide an Explainable LLM Multi-Agent System Analyzer that uses Bayesian networks to model agent interactions and offers an Interactive Explanation Interface with “Why” and “What If” capabilities. This system allows users to probe the multi-agent system’s decisions through natural language queries, generating explanations and simulations that reveal the underlying reasoning processes and causal relationships between agents.
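As a minimal illustrative sketch of this idea (not the disclosed implementation), the following Python snippet models a hypothetical two-agent pipeline as a small Bayesian network. A “Why” query computes the posterior over an upstream agent’s choice given an observed outcome, and a “What If” query simulates an intervention on a downstream agent’s action. All agent names, states, and probabilities are assumptions chosen for illustration.

# A hypothetical two-agent pipeline modeled as a small Bayesian network:
# planner -> retriever -> answer. All names and numbers are illustrative.
PLANNER = ["decompose", "direct"]    # planning agent's strategy
RETRIEVER = ["search", "cache"]      # retrieval agent's action
ANSWER = ["correct", "wrong"]        # final system outcome

P_PLANNER = {"decompose": 0.7, "direct": 0.3}          # P(planner)
P_RETRIEVER = {                                        # P(retriever | planner)
    "decompose": {"search": 0.8, "cache": 0.2},
    "direct":    {"search": 0.3, "cache": 0.7},
}
P_ANSWER = {                                           # P(answer | retriever)
    "search": {"correct": 0.9, "wrong": 0.1},
    "cache":  {"correct": 0.6, "wrong": 0.4},
}

def joint(p, r, a, do_retriever=None):
    """Probability of one full assignment; do_retriever forces an intervention."""
    if do_retriever is None:
        p_r = P_RETRIEVER[p][r]
    else:
        p_r = 1.0 if r == do_retriever else 0.0        # do(retriever = value)
    return P_PLANNER[p] * p_r * P_ANSWER[r][a]

def why(observed_answer):
    """'Why' query: posterior over the planner's choice given the outcome."""
    scores = {p: sum(joint(p, r, observed_answer) for r in RETRIEVER)
              for p in PLANNER}
    total = sum(scores.values())
    return {p: s / total for p, s in scores.items()}

def what_if(forced_retriever):
    """'What If' query: outcome distribution under do(retriever = value)."""
    return {a: sum(joint(p, forced_retriever, a, do_retriever=forced_retriever)
                   for p in PLANNER)
            for a in ANSWER}

# "Why was the answer wrong?" -> which planner strategy most likely led to it.
print(why("wrong"))        # {'decompose': ~0.55, 'direct': ~0.45}
# "What if retrieval were forced to search?" -> simulated outcome distribution.
print(what_if("search"))   # {'correct': 0.9, 'wrong': 0.1}

In the disclosed system, a user’s natural language queries would be mapped onto inference operations of this kind, with “Why” answered by posterior inference over upstream agent decisions and “What If” answered by interventional simulation of alternative agent actions.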

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
