Abstract
Generative AI systems are increasingly used to produce marketing copy at scale, enabling rapid content creation across advertising, product descriptions, and customer engagement channels. While these systems improve efficiency and linguistic quality, they introduce a growing compliance risk: AI-generated content may appear highly persuasive while containing exaggerated, unsupported, or contextually misleading claims. Such outputs can expose organizations to regulatory scrutiny, brand trust erosion, and downstream legal liability. This disclosure introduces a detection framework for AI-Generated Copy Risk, focused on identifying persuasive marketing language that exceeds verifiable product support or approved messaging boundaries. The proposed system analyzes claim intensity, evidentiary grounding, and policy alignment signals to compute a bounded Compliance Risk Score (CRS). The approach is model-agnostic and designed for real-time integration into marketing content pipelines. By surfacing high-risk promotional language before publication, the framework enables scalable oversight of AI-assisted marketing operations.
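The abstract describes combining three signals into a bounded Compliance Risk Score. As a minimal sketch of how such a score might be computed, assuming each signal is normalized to [0, 1], higher claim intensity raises risk while stronger grounding and alignment lower it, and the weights are purely illustrative (the function name, weights, and signal encoding are all hypothetical, not the disclosed implementation):

```python
# Illustrative sketch only: combine claim intensity, evidentiary grounding,
# and policy alignment into a bounded Compliance Risk Score (CRS).
# Weights and signal encodings are hypothetical assumptions.

def compliance_risk_score(claim_intensity: float,
                          evidentiary_grounding: float,
                          policy_alignment: float,
                          weights=(0.4, 0.35, 0.25)) -> float:
    """Each input is assumed normalized to [0, 1].

    Higher claim intensity increases risk; higher evidentiary grounding
    and policy alignment decrease it. Returns a CRS bounded to [0, 1].
    """
    w_claim, w_evidence, w_policy = weights
    raw = (w_claim * claim_intensity
           + w_evidence * (1.0 - evidentiary_grounding)
           + w_policy * (1.0 - policy_alignment))
    # Clamp so the score stays bounded regardless of weight choice.
    return max(0.0, min(1.0, raw))

# Example: a highly persuasive claim with weak evidence and partial
# policy alignment produces an elevated risk score.
score = compliance_risk_score(0.9, 0.2, 0.5)
```

In practice the framework could threshold such a score to flag high-risk copy for human review before publication; how the underlying signals are extracted from the text is outside this sketch.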
Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Bhatnagar, Pranav, "AI-Generated Copy Risk: Detecting Persuasive but Misleading Marketing Content", Technical Disclosure Commons, ()
https://www.tdcommons.org/dpubs_series/9384