Inventor(s)

Akram Sheriff

Abstract

Techniques are proposed herein for deriving an adaptive and coherent agentic trust score in a multi-agentic system, a score that can be used to trigger a human-in-the-loop (HIL) workflow. Since not all workflows require HIL feedback, the agent trust score can be used for real-time decision making to determine whether HIL feedback is required for a given workflow. The techniques prioritize trust by establishing metrics across data boundaries, Application Programming Interface (API) reliability, and metadata compliance, each of which contributes to the derivation of the agent trust score. Thus, the real-time agentic scoring system proposed herein may foster confidence in Large Language Model (LLM) agents by quantifying trust elements that are essential for regulated, secure, and high-stakes applications.
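The abstract does not specify how the three signal categories are combined, but one plausible reading is a weighted combination of normalized per-category scores compared against a threshold that gates the HIL workflow. The Python sketch below illustrates that reading; the weights, the 0.75 threshold, and all names (TrustSignals, agent_trust_score, requires_hil) are illustrative assumptions, not details from the disclosure.

    from dataclasses import dataclass

    @dataclass
    class TrustSignals:
        """Per-agent trust signals, each normalized to [0.0, 1.0].

        These correspond to the three metric categories named in the
        abstract; the normalization and field names are assumptions.
        """
        data_boundary: float        # adherence to data-boundary policies
        api_reliability: float      # observed API success/latency health
        metadata_compliance: float  # conformance of metadata to schema/policy

    def agent_trust_score(signals: TrustSignals,
                          weights: tuple[float, float, float] = (0.4, 0.3, 0.3)) -> float:
        """Combine the three signals into a single trust score in [0, 1].

        A weighted sum is one simple choice; the disclosure does not
        commit to a specific aggregation function or weights.
        """
        w_db, w_api, w_meta = weights
        return (w_db * signals.data_boundary
                + w_api * signals.api_reliability
                + w_meta * signals.metadata_compliance)

    def requires_hil(score: float, threshold: float = 0.75) -> bool:
        """Trigger a human-in-the-loop review when trust falls below a
        (hypothetical) threshold, so low-trust workflows get HIL feedback."""
        return score < threshold

    if __name__ == "__main__":
        signals = TrustSignals(data_boundary=0.9,
                               api_reliability=0.6,
                               metadata_compliance=0.8)
        score = agent_trust_score(signals)
        print(f"trust score = {score:.2f}, HIL required: {requires_hil(score)}")

In this sketch the weighted-sum aggregation and the fixed threshold are design assumptions; a production system could instead learn the weights from feedback or apply per-workflow thresholds when deciding whether to route a task to a human reviewer.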

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
