Abstract

A substantial fraction of software and application development effort is spent on testing and debugging. For mobile software, manual quality assurance (QA) testing on devices can be time-consuming, repetitive, and error-prone. This disclosure describes techniques that leverage a large language model (LLM) to automate the execution of test cases, perform visual assertions on screenshots, and identify and report bugs in the software under test. The techniques leverage the abilities of LLMs to understand natural language instructions (e.g., input prompts), generate text (e.g., based on input test cases), and perform image recognition and visual question answering (e.g., to provide visual assertions on screenshots obtained from simulation). By doing so, they automate repetitive task and feature tests, improve the accuracy of QA testing, and reduce the overall time required to perform tests, providing substantial cost savings and improved software quality.
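To make the visual-assertion idea concrete, below is a minimal Python sketch of how a single LLM-based check on a simulator screenshot might be wired up. It is illustrative only and not taken from the disclosure: visual_assert, AssertionResult, the PASS/FAIL response protocol, and the query_multimodal_llm callable (a stand-in for whatever multimodal LLM endpoint is available) are all assumed names, and file_bug_report in the usage comment is a hypothetical reporting hook.

```python
# Sketch (assumptions noted above): run one LLM-based visual assertion
# against a screenshot captured from a device simulator.
import base64
from dataclasses import dataclass
from typing import Callable


@dataclass
class AssertionResult:
    passed: bool
    explanation: str


def visual_assert(
    screenshot_path: str,
    assertion: str,
    query_multimodal_llm: Callable[[str, str], str],  # hypothetical LLM client
) -> AssertionResult:
    """Ask a multimodal LLM whether `assertion` holds for the screenshot."""
    with open(screenshot_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")

    # Illustrative prompt; the PASS/FAIL protocol is an assumption for parsing.
    prompt = (
        "You are a QA assistant. Inspect the attached app screenshot and decide "
        f'whether the following assertion holds: "{assertion}". '
        "Answer 'PASS' or 'FAIL' on the first line, then give a one-sentence "
        "explanation (include a bug description if it fails)."
    )
    reply = query_multimodal_llm(prompt, image_b64)

    first_line, _, rest = reply.partition("\n")
    return AssertionResult(
        passed=first_line.strip().upper().startswith("PASS"),
        explanation=rest.strip() or first_line.strip(),
    )


# Example use inside an automated test step:
# result = visual_assert("login_screen.png",
#                        "A 'Sign in' button is visible and enabled",
#                        query_multimodal_llm=my_llm_client)
# if not result.passed:
#     file_bug_report("login_screen.png", result.explanation)  # hypothetical reporter
```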

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
