Test engineering, including the development of new test cases from large volumes of log data, can be expensive. User reports often do not include a clear description of the context in which a poor user experience occurred, and the effectiveness of a test case derived from user reports is hard to validate. This disclosure describes the use of machine learning techniques, such as attention-based neural networks and language models, to generate from user reports realistic, multi-faceted test sequences designed to trigger failures. A pattern extractor extracts critical events relevant to particular failures from session logs. A test sequence generator generates user-like action or event sequences based on the patterns extracted by the pattern extractor. A test sequence validator determines whether the generated test sequences are close to user-like behavior and likely to trigger the expected failures.
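The three-stage pipeline described above (extractor, generator, validator) can be sketched as follows. This is a minimal illustration with hypothetical class and function names: simple n-gram heuristics stand in for the attention-based neural networks and language models the disclosure envisions.

```python
from collections import Counter
from dataclasses import dataclass
from typing import List

@dataclass
class Pattern:
    events: tuple   # critical event subsequence preceding a failure
    support: int    # how often it precedes the failure in the logs

class PatternExtractor:
    """Extracts event pairs that immediately precede a failure event.
    A real implementation would learn richer patterns with an
    attention-based model rather than counting fixed-length windows."""
    def extract(self, sessions: List[List[str]], failure: str) -> List[Pattern]:
        counts = Counter()
        for log in sessions:
            for i, ev in enumerate(log):
                if ev == failure and i >= 2:
                    counts[tuple(log[i - 2:i])] += 1
        return [Pattern(events, n) for events, n in counts.most_common()]

class TestSequenceGenerator:
    """Produces a user-like action sequence that embeds an extracted
    pattern; a language model would generate the surrounding context."""
    def generate(self, pattern: Pattern, prefix: List[str]) -> List[str]:
        return prefix + list(pattern.events)

class TestSequenceValidator:
    """Checks that a candidate sequence stays within the observed event
    vocabulary (user-like) and still contains the failure pattern."""
    def validate(self, seq: List[str], pattern: Pattern, vocab: set) -> bool:
        user_like = all(ev in vocab for ev in seq)
        triggers = any(
            tuple(seq[i:i + len(pattern.events)]) == pattern.events
            for i in range(len(seq))
        )
        return user_like and triggers

# Example run on toy session logs with a synthetic "crash" failure.
sessions = [
    ["open", "search", "upload", "crash"],
    ["open", "search", "upload", "crash"],
    ["open", "browse", "logout"],
]
patterns = PatternExtractor().extract(sessions, failure="crash")
top = patterns[0]  # most frequent pre-failure pattern
test_seq = TestSequenceGenerator().generate(top, prefix=["open"])
vocab = {"open", "search", "upload", "browse", "logout"}
ok = TestSequenceValidator().validate(test_seq, top, vocab)
```

In this toy run, the extractor identifies the `("search", "upload")` pair as the pattern most often preceding the crash, the generator embeds it in a plausible session, and the validator confirms the sequence uses only observed events while retaining the failure trigger.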

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.