Abstract
Conventional large language model (LLM)-based multi-agent systems suffer from a critical lack of transparency and explainability, making it difficult for users to understand why agents make specific decisions or how they interact with one another. To address these challenges, the techniques presented herein provide an Explainable LLM Multi-Agent System Analyzer that uses Bayesian networks to model agent interactions and provides an Interactive Explanation Interface with “Why” and “What If” capabilities. The system allows users to probe the multi-agent system’s decisions through natural language queries, generating explanations and simulations that reveal the underlying reasoning processes and causal relationships between agents.
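As a rough illustration of the idea (not part of the original disclosure), the sketch below models a toy two-agent pipeline as a discrete Bayesian network in plain Python. The variable names (PlannerConfident, RetrieverThorough, AnswerCorrect) and all probabilities are hypothetical. A “Why” style query computes a posterior over the outcome given observed evidence about an agent, and a “What If” style query simulates an intervention by forcing one agent’s behavior and recomputing the outcome distribution.

from itertools import product

# Toy Bayesian network over three binary agent variables (names and numbers
# are invented for illustration):
#   PlannerConfident -> RetrieverThorough -> AnswerCorrect
p_planner = {True: 0.7, False: 0.3}                    # P(PlannerConfident)
p_retriever = {True: {True: 0.8, False: 0.2},          # P(RetrieverThorough | Planner)
               False: {True: 0.4, False: 0.6}}
p_answer = {True: {True: 0.9, False: 0.1},             # P(AnswerCorrect | Retriever)
            False: {True: 0.5, False: 0.5}}

def joint(planner, retriever, answer, do_retriever=None):
    """Joint probability of one full assignment; do_retriever simulates a
    'What If' intervention by replacing the retriever's CPD with a point mass."""
    p = p_planner[planner]
    if do_retriever is None:
        p *= p_retriever[planner][retriever]
    else:
        p *= 1.0 if retriever == do_retriever else 0.0
    return p * p_answer[retriever][answer]

def posterior_answer(evidence=None, do_retriever=None):
    """P(AnswerCorrect = True | evidence), optionally under an intervention."""
    evidence = evidence or {}
    num, den = 0.0, 0.0
    for planner, retriever, answer in product([True, False], repeat=3):
        state = {"planner": planner, "retriever": retriever, "answer": answer}
        if any(state[k] != v for k, v in evidence.items()):
            continue
        p = joint(planner, retriever, answer, do_retriever)
        den += p
        if answer:
            num += p
    return num / den

# "Why" style query: condition on observed evidence about an agent.
print("P(correct | planner not confident) =",
      round(posterior_answer({"planner": False}), 3))

# "What If" style query: force the retriever to be thorough and re-simulate.
print("P(correct | do(retriever = thorough)) =",
      round(posterior_answer(do_retriever=True), 3))

In a full system of the kind described above, the network structure and conditional probabilities would be learned or estimated from logged agent interactions rather than hard-coded, and the natural language interface would translate user questions into queries of this form.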
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Morandi, Andrea, "EXPLAINABLE LARGE LANGUAGE MODEL (LLM) MULTI-AGENT SYSTEM ANALYZER WITH INTERACTIVE BAYESIAN REASONING INTERFACE", Technical Disclosure Commons, (September 24, 2025)
https://www.tdcommons.org/dpubs_series/8638