Abstract
This disclosure introduces an AI Autopilot Risk detection framework designed to identify when users begin relying on AI-generated outputs with reduced independent scrutiny. As AI assistants become embedded in daily workflows, repeated exposure to reliable outputs can gradually shift user behavior from active verification to passive acceptance. This subtle drift increases the likelihood that incorrect or contextually weak AI outputs propagate through operational pipelines despite the continued presence of a human in the loop. The proposed system continuously monitors human interaction signals, including response latency, acceptance patterns, dwell time, and session-level engagement. These signals are synthesized into an Autopilot Risk Score (ARS) that reflects the current state of human oversight. When elevated risk is detected, the framework introduces calibrated interventions that restore appropriate human attention without materially disrupting workflow efficiency. The approach is model-agnostic and suitable for enterprise copilots, security operations center (SOC) environments, financial review systems, and other human-in-the-loop AI deployments.
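To make the signal-to-score pipeline concrete, the sketch below shows one plausible way to blend the four named signals into an ARS and map it to tiered interventions. The disclosure names the signals and the score but does not publish a formula, so the normalization baselines, weights, thresholds, and all identifiers (InteractionSignals, autopilot_risk_score, choose_intervention) are hypothetical illustrations, not the author's method.

```python
from dataclasses import dataclass

@dataclass
class InteractionSignals:
    response_latency_s: float   # time from AI output shown to user action
    acceptance_rate: float      # fraction of recent outputs accepted unedited, 0..1
    dwell_time_s: float         # time spent reviewing the output
    session_engagement: float   # session-level engagement estimate, 0..1

def autopilot_risk_score(s: InteractionSignals,
                         baseline_latency_s: float = 8.0,
                         baseline_dwell_s: float = 12.0) -> float:
    """Blend normalized signals into a 0..1 Autopilot Risk Score.

    Higher scores indicate weaker independent scrutiny: fast acceptance,
    little dwell time, a high unedited-acceptance rate, and low engagement.
    Baselines and weights here are illustrative assumptions.
    """
    latency_risk = max(0.0, 1.0 - s.response_latency_s / baseline_latency_s)
    dwell_risk = max(0.0, 1.0 - s.dwell_time_s / baseline_dwell_s)
    acceptance_risk = s.acceptance_rate
    engagement_risk = 1.0 - s.session_engagement
    weights = (0.25, 0.25, 0.30, 0.20)  # hypothetical weighting
    risks = (latency_risk, dwell_risk, acceptance_risk, engagement_risk)
    return min(1.0, sum(w * r for w, r in zip(weights, risks)))

def choose_intervention(ars: float) -> str:
    """Map the score to a calibrated intervention tier (illustrative thresholds)."""
    if ars >= 0.8:
        return "require_explicit_verification"  # e.g. confirm a key detail before accept
    if ars >= 0.5:
        return "inject_review_prompt"           # lightweight nudge to re-check the output
    return "none"

# Example: a fast, unexamined acceptance pattern yields an elevated score.
signals = InteractionSignals(response_latency_s=1.5, acceptance_rate=0.95,
                             dwell_time_s=2.0, session_engagement=0.2)
ars = autopilot_risk_score(signals)
print(f"ARS={ars:.2f}, intervention={choose_intervention(ars)}")
```

Under this framing, the graduated thresholds reflect the abstract's design goal: interventions escalate only as oversight weakens, so routine high-attention workflows are not materially disrupted.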
Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Bhatnagar, Pranav, "AI Autopilot Risk: Detecting When Users Stop Thinking and Start Trusting: Human Over-Reliance Detection in AI-Assisted Workflows", Technical Disclosure Commons (February 23, 2026)
https://www.tdcommons.org/dpubs_series/9382