Abstract

A decline in performance between release versions of a software product is termed a regression. Traditional reliability testing and benchmarking tools detect large regressions far more readily than small ones. As code changes accumulate over time, the cumulative effect of undetected micro regressions can noticeably degrade performance. This disclosure describes techniques for timely detection of micro regressions based on static analysis of the code changes in a change request combined with targeted dynamic benchmarking. The static analysis can be performed by comparing the abstract syntax tree (AST) and/or call graphs of the software before and after the changes associated with a change request. The results of the comparison can be employed to detect any small or large regression resulting from the changes via bytecode-injected binaries of the software in a laboratory testing environment. The approach can save substantial time and effort in detecting and addressing small regressions, thereby speeding up the application as well as the software development pipeline and avoiding negative impacts on user engagement.
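As a rough illustration of the static-analysis step only (not the disclosed implementation), the sketch below uses Python's built-in ast module to fingerprint each function in two revisions of a source file and flag the functions whose structure changed, which could then be selected for targeted micro-benchmarking. The file names and function names are hypothetical.

import ast
import hashlib


def function_fingerprints(source: str) -> dict[str, str]:
    """Map each function name to a hash of its AST dump."""
    tree = ast.parse(source)
    fingerprints = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # ast.dump ignores formatting and comments, so only structural
            # (and thus potentially behavioral) changes alter the hash.
            digest = hashlib.sha256(ast.dump(node).encode()).hexdigest()
            fingerprints[node.name] = digest
    return fingerprints


def changed_functions(before_src: str, after_src: str) -> set[str]:
    """Return names of functions whose AST differs between two revisions."""
    before = function_fingerprints(before_src)
    after = function_fingerprints(after_src)
    return {name for name in set(before) | set(after)
            if before.get(name) != after.get(name)}


if __name__ == "__main__":
    # Hypothetical before/after snapshots of a file touched by a change request.
    before_src = open("service_before.py").read()
    after_src = open("service_after.py").read()
    for name in sorted(changed_functions(before_src, after_src)):
        print(f"Benchmark candidate: {name}")

In a full pipeline along the lines described above, the flagged functions would be instrumented (e.g., via bytecode injection for JVM-based software) and benchmarked in a laboratory environment to surface any micro regression introduced by the change.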

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
