Abstract
Proposed herein is an explainability technique, referred to as the "Semantic Policy Summaries and Semantic Explanations" technique, for artificial intelligence (AI)-driven networking that translates complex model decisions into clear, human-readable explanations grounded in real-world network concepts. Instead of abstract outputs such as feature scores, the proposed technique delivers goal-oriented justifications for AI-driven network decisions and actions. The proposed technique integrates directly into a control loop, supports policy-based triggering, and logs explanations alongside decision metadata for audit and regulatory review. Further, the proposed technique aligns with requirements of the European Union (EU) AI Act and the proposed United States (U.S.) SAFE Innovation Act, helping operators understand, trust, and govern AI decisions in critical infrastructure.
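As a rough illustration of the audit-logging aspect described above, the following minimal Python sketch shows how a goal-oriented, human-readable explanation might be recorded alongside decision metadata for later review. It is not taken from the disclosure; all names (e.g., ExplanationRecord, log_explanation) and the example field values are hypothetical assumptions for illustration only.

```python
import json
import logging
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_decision_audit")


@dataclass
class ExplanationRecord:
    """Pairs a human-readable justification with metadata about the AI decision it explains.

    Hypothetical structure; the disclosure does not specify field names or formats.
    """
    decision_id: str
    action: str            # network action taken by the AI controller
    justification: str     # goal-oriented, human-readable explanation
    model_version: str
    policy_trigger: str    # policy rule that triggered explanation generation
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def log_explanation(record: ExplanationRecord) -> None:
    """Emit the explanation and its decision metadata as one auditable JSON log entry."""
    audit_log.info(json.dumps(asdict(record)))


# Example: an AI-driven traffic-engineering decision explained in network terms.
log_explanation(ExplanationRecord(
    decision_id="d-20250723-001",
    action="reroute video traffic from link A to link B",
    justification=("Link A utilization exceeded the 80% congestion threshold; "
                   "rerouting preserves the latency SLA for video flows."),
    model_version="traffic-optimizer-v2.3",
    policy_trigger="explain-on-reroute",
))
```

In such a sketch, keeping the justification and the decision metadata in a single structured log entry is what would allow an auditor or regulator to trace each AI action back to both the triggering policy and the plain-language rationale.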
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Lanov, Dennis, "SEMANTIC POLICY SUMMARIES AND SEMANTIC EXPLANATIONS FOR EXPLAINABLE AI IN NETWORKING", Technical Disclosure Commons, (July 23, 2025)
https://www.tdcommons.org/dpubs_series/8385