Abstract
This publication describes a method for the automated analysis of AI-generated code to detect potential software license violations and known security vulnerabilities prior to integration into commercial products. By leveraging datasets such as purldb and vulnerablecode from AboutCode, this approach enables the systematic evaluation of code produced by Large Language Models (LLMs), ensuring alignment with both legal compliance standards and security best practices. The system mitigates risks associated with third-party dependencies and embedded code snippets by cross-referencing license constraints and vulnerability disclosures, providing developers with actionable insights for maintaining secure and compliant software artifacts.
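The cross-referencing step described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the package records, license policy, purls, and vulnerability identifier below are all fabricated stand-ins, and a real deployment would resolve this metadata by querying the AboutCode purldb and vulnerablecode services rather than using hard-coded records.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical, simplified records standing in for purldb / vulnerablecode
# lookups; values here are fabricated for illustration only.

@dataclass
class PackageRecord:
    purl: str                                  # Package URL identifying the dependency
    license_expression: str                    # SPDX license expression from package metadata
    vulnerability_ids: List[str] = field(default_factory=list)

def assess_package(record: PackageRecord, allowed_licenses: set) -> List[str]:
    """Flag license and security risks for one package detected in generated code."""
    findings = []
    if record.license_expression not in allowed_licenses:
        findings.append(
            f"license violation: {record.license_expression} ({record.purl})"
        )
    for vuln_id in record.vulnerability_ids:
        findings.append(f"known vulnerability: {vuln_id} ({record.purl})")
    return findings

# Usage: evaluate packages detected in an AI-generated snippet against a
# (hypothetical) organizational license policy.
policy = {"MIT", "Apache-2.0", "BSD-3-Clause"}
detected = [
    PackageRecord("pkg:pypi/examplepkg@1.0.0", "Apache-2.0"),
    PackageRecord("pkg:npm/otherpkg@2.0.0", "GPL-3.0-only", ["VCID-example-0001"]),
]
report = [f for pkg in detected for f in assess_package(pkg, policy)]
```

Here the first package passes both checks, while the second yields two findings: a license outside the policy and a known vulnerability, which would be surfaced to the developer before integration.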
Creative Commons License
This work is licensed under a Creative Commons Attribution-Share Alike 4.0 License.
Recommended Citation
Goel, Tushar and Ombredanne, Philippe, "Ensuring AI-Generated Code Compliance and Security", Technical Disclosure Commons, (April 28, 2025)
https://www.tdcommons.org/dpubs_series/8041