Inventor(s)

Keun Soo Yim

Abstract

Software testing requires selecting the tests appropriate for a given event, such as a code change, software release, or solution deployment. Manual test selection is slow, expensive, and error-prone, and relevant tests may be left out of the test set. This disclosure presents the use of a large language model (LLM) to select the optimal set of test cases relevant to a given software code change or update. Descriptions of source code changes and dependencies, along with descriptions of labeled test cases, are used to prompt the LLM to select relevant test cases. Multiple iterations of LLM prompting can be performed if the set of test cases output by the LLM exceeds the available resources. If no dependency information is included in the input, the LLM can be prompted to select test cases based only on the descriptions of the code changes and test cases. If the descriptions are unavailable or unusable, the LLM can first be prompted to produce appropriate descriptions from the raw source code or code-change deltas of the software and the test cases. The set of test cases identified by the LLM can increase the chances of finding problems in the updated software or test cases while minimizing the total cost of running the selected tests.
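
The disclosure describes this workflow at a high level without a reference implementation. A minimal Python sketch of the iterative selection loop might look as follows; here call_llm, the prompt wording, and the budget heuristic are hypothetical placeholders introduced for illustration, not part of the disclosure.

    # Minimal sketch of the selection flow described above. `call_llm`,
    # the prompt format, and the budget heuristic are illustrative
    # assumptions, not specified by the disclosure.

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for any LLM completion API."""
        raise NotImplementedError

    def describe(raw_text: str) -> str:
        # Per the disclosure, when descriptions are unavailable or
        # unusable, the LLM can first be prompted to generate them from
        # raw source code or code-change deltas.
        return call_llm(f"Describe what this code or change does:\n{raw_text}")

    def build_prompt(change_desc: str, dependency_desc: str | None,
                     test_descs: dict[str, str]) -> str:
        tests = "\n".join(f"{tid}: {desc}" for tid, desc in test_descs.items())
        parts = ["Select the test cases relevant to this code change.",
                 f"Code change: {change_desc}"]
        if dependency_desc:  # dependency info is optional per the disclosure
            parts.append(f"Dependencies: {dependency_desc}")
        parts.append(f"Candidate test cases:\n{tests}")
        parts.append("Return one test case ID per line.")
        return "\n\n".join(parts)

    def select_tests(change_desc: str, dependency_desc: str | None,
                     test_descs: dict[str, str], budget: int,
                     max_rounds: int = 3) -> list[str]:
        """Iteratively narrow the LLM's selection until it fits the budget."""
        candidates = dict(test_descs)
        selected = list(candidates)
        for _ in range(max_rounds):
            reply = call_llm(build_prompt(change_desc, dependency_desc,
                                          candidates))
            selected = [ln.strip() for ln in reply.splitlines()
                        if ln.strip() in candidates]
            if len(selected) <= budget:
                return selected
            # Too many tests for the available resources: re-prompt over
            # the LLM's own shortlist so the next round narrows it further.
            candidates = {tid: test_descs[tid] for tid in selected}
        return selected[:budget]  # last resort: truncate to the budget

In this sketch, the re-prompting loop implements the disclosure's multiple-iteration step: each round feeds the LLM's previous shortlist back as the new candidate set until the selection fits the resource budget.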

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
