Abstract

Customer-facing AI was supposed to make digital experiences faster, smoother, and more personalized. In heavily regulated environments, something more complicated is unfolding. As compliance layers accumulate around production systems, many assistants are becoming impeccably safe and increasingly frustrating to use. Responses grow longer, refusals appear in low-risk scenarios, and basic customer journeys begin to feel like negotiations with a legal department. This paper names the pattern AI Compliance Overreach (ACO) and frames it as a measurable degradation of customer experience driven by disproportionate regulatory pressure. The proposed Customer Experience Suppression Index (CESI) quantifies the point at which guardrail intensity begins to erode interaction quality, task-completion velocity, and user-satisfaction signals. Rather than questioning the necessity of compliance controls, the framework evaluates the proportionality between regulatory risk and customer friction. Controlled simulations across support, onboarding, and self-service environments show that elevated CESI correlates strongly with prompt repetition, abandonment behavior, and escalation rates. The findings point to a growing blind spot in enterprise AI governance: systems can be fully compliant and still quietly damage the customer relationship.

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
