Test cases for software are designed both to provide code coverage and to exercise the code against realistic user behavior. Building such test cases can be challenging. Current tools rely on macros (scripts) to simulate user behavior; building these macros requires manually programming test cases and/or randomizing user actions during the test. This disclosure describes the use of generative artificial intelligence (AI) techniques to learn the patterns of user behavior on a website, app, or other software to be tested. For example, a large language model (LLM) can be utilized to learn these patterns. The LLM can then be prompted to automatically generate test cases during the automated testing phase of the software development life cycle. The automatically generated test cases can encapsulate specific user personas based on the set of actions in the LLM response.
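The persona-driven generation described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the prompt template, the JSON action schema, and the stubbed model response are all assumptions made for the example, and a real system would replace the stub with an actual LLM call.

```python
import json

def build_prompt(persona: str, app_description: str) -> str:
    # Hypothetical prompt template: asks the LLM for persona-driven
    # test steps in a machine-readable JSON format.
    return (
        f"You are simulating a user persona: {persona}.\n"
        f"Application under test: {app_description}.\n"
        "Return a JSON list of UI actions, each with 'action' and 'target' "
        "keys, representing a realistic session for this persona."
    )

def parse_test_case(llm_response: str) -> list:
    # Convert the LLM's JSON response into a list of executable test steps,
    # keeping only well-formed entries.
    steps = json.loads(llm_response)
    return [s for s in steps if {"action", "target"} <= set(s.keys())]

# Stubbed response standing in for a real model call (assumption: the model
# was instructed to emit JSON as specified in build_prompt above).
stub_response = json.dumps([
    {"action": "click", "target": "login_button"},
    {"action": "type", "target": "search_box"},
])

prompt = build_prompt("bargain hunter", "e-commerce website")
test_case = parse_test_case(stub_response)
print(len(test_case))
```

Each parsed step could then be dispatched to a UI automation driver, so that the generated session replays the persona's behavior against the software under test.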
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Dantas, Victor, "Large Language Model Powered Test Case Generation for Software Applications", Technical Disclosure Commons, (September 26, 2023)