Abstract

Hybrid and augmented workflows, in which predictions or insights produced by automation tools are handed over to human operators, are known to cause cognitive overload. Generally, cognitive overload occurs when an automated system pushes too much information to a human operator. When such a push of information is sustained over time, cognitive overload leads to what is known as "alert fatigue," whereby the insights of an automated system go unused, which can lead to poor adoption. One type of cognitive overload specific to cognitive systems arises when predictions/insights are not necessarily numerous but rather too complex to understand and interpret. The inability to understand the reasons behind predictions can be a barrier to broader adoption of artificial intelligence (AI) operations. Presented herein is a novel technique to derive explanations for predictions using multiple contexts, which can help system users rapidly estimate the importance of predictions from several angles, thereby leading to greater trust and system adoption, as well as improved reaction time.
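The abstract does not specify how multi-context explanations are derived. Purely as an illustrative sketch, and assuming hypothetical explainers (temporal_context, topology_context) and an alert record format invented for this example, the following Python fragment shows the general shape of the idea: each context independently scores and summarizes a prediction, and the results are ranked so an operator can estimate importance from several angles at a glance.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class ContextExplanation:
        context: str       # which angle the explanation comes from
        importance: float  # 0..1 estimate of how much this context matters
        summary: str       # one-line, operator-readable rationale

    # Hypothetical per-context explainers; a real system would derive these
    # from model internals (e.g., feature attributions), topology, or history.
    def temporal_context(pred: Dict) -> ContextExplanation:
        spike = pred["value"] / max(pred["baseline"], 1e-9)
        return ContextExplanation(
            "temporal", min(spike / 10, 1.0),
            f"Metric is {spike:.1f}x its 7-day baseline")

    def topology_context(pred: Dict) -> ContextExplanation:
        n = len(pred["affected_neighbors"])
        return ContextExplanation(
            "topology", min(n / 5, 1.0),
            f"{n} downstream services share the affected node")

    def explain(pred: Dict,
                explainers: List[Callable]) -> List[ContextExplanation]:
        """Collect one explanation per context, most important first."""
        return sorted((e(pred) for e in explainers),
                      key=lambda c: c.importance, reverse=True)

    if __name__ == "__main__":
        alert = {"value": 950.0, "baseline": 120.0,
                 "affected_neighbors": ["db", "cache", "api"]}
        for c in explain(alert, [temporal_context, topology_context]):
            print(f"[{c.context:8s}] {c.importance:.2f} {c.summary}")

Running the sketch prints one ranked, one-line rationale per context, which is the kind of compact, multi-angle summary the technique aims to give operators.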

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
