Abstract

Software code generated via artificial intelligence (AI) models can contain errors that render the code unusable without significant manual effort to identify and rectify each issue. The need for human oversight can reduce the overall effectiveness of the software development and testing process. This disclosure describes automated techniques that incorporate feedback, external error information, and contextual awareness to improve the accuracy, reliability, and efficiency of AI-generated code. The iterative process involves code evaluation followed by script validation. The process generates feedback that provides clear and actionable input to the AI model to fix errors in previously generated code. The AI model type and/or model parameters can be adjusted to balance the accuracy of the generated code against the speed and cost of code generation, according to user input regarding the specific requirements of the scenario at hand. Automating the process of generating and checking code can enable the scarce time of human software professionals to be allocated efficiently, improving the effectiveness of the overall software development process.

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
