Hai Huang


Microbenchmarking is widely used to track performance regressions between versions of a software method, application programming interface (API), or component, and to detect and prevent changes that negatively affect performance or scalability. Microbenchmarking is vulnerable to run-to-run variance, whereby multiple executions of the same microbenchmark produce different performance results. While larger numbers of iterations can be used to reduce variance, this imposes higher execution costs and can delay the release of software to production environments.

This disclosure describes a novel approach to executing microbenchmarks that utilizes the observation that a microbenchmark test usually includes multiple APIs. In the proposed approach, the iterations of each API are divided into multiple chunks, and each chunk is executed in one shot. The execution of chunks from one API is interleaved with that of chunks from one or more other APIs. The approach mimics real production environments, enables a user to provide input to control the number and order of chunks, e.g., based on production or synthetic data, and produces multiple sample points for each API tested. The interleaving approach can reduce the variance in test results without increasing the execution cost.
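As a rough illustration of the idea, the sketch below divides each API's iterations into fixed-size chunks, interleaves chunks from different APIs (with optional user-controlled shuffling of the chunk order), and records one timing sample per chunk. All names here (`run_interleaved`, the toy "APIs") are hypothetical, not part of the disclosure itself.

```python
import random
import statistics
import time

def run_interleaved(apis, total_iters=10_000, n_chunks=20, shuffle=True):
    """Benchmark several callables by interleaving fixed-size chunks.

    apis: dict mapping a name to a zero-argument callable (hypothetical API under test).
    Returns a dict of per-API chunk timings: one sample point per chunk,
    so each API yields n_chunks samples instead of a single aggregate.
    """
    chunk_iters = total_iters // n_chunks
    # Schedule chunk 0 of every API, then chunk 1 of every API, and so on,
    # so chunks of different APIs alternate rather than run back to back.
    schedule = [name for _ in range(n_chunks) for name in apis]
    if shuffle:
        random.shuffle(schedule)  # stand-in for user-supplied chunk ordering
    samples = {name: [] for name in apis}
    for name in schedule:
        fn = apis[name]
        start = time.perf_counter()
        for _ in range(chunk_iters):
            fn()  # run one chunk of this API in one shot
        samples[name].append(time.perf_counter() - start)
    return samples

# Usage: two toy "APIs"; each produces n_chunks timing samples.
results = run_interleaved(
    {"list_append": lambda: [].append(1),
     "dict_update": lambda: {}.update(a=1)},
    total_iters=2_000, n_chunks=10)
for name, times in results.items():
    print(name, round(statistics.mean(times), 6), round(statistics.stdev(times), 6))
```

The multiple per-chunk samples allow standard statistics (mean, standard deviation, outlier rejection) to be computed per API, which is how the interleaved scheme can lower observed variance without raising the total iteration count.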

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.