Abstract
This document describes an automated process for evaluating the impact of new software updates (also referred to as software deployments, binary releases, or application builds) by comparing each new update against historical release data rather than against a concurrent control group. Traditional A/B testing, also known as split testing, during routine rollouts, including continuous integration and continuous delivery (CI/CD) deployments and canary releases, frequently misidentifies normal statistical noise as a failure or software regression, because the rollout process itself biases the testing environment. During a new software release, the computing device dynamically sets a minimum number of users from which to collect data and a maximum allowable deviation in performance metrics relative to historical software releases. This automated release gating prevents engineering teams from investigating false-positive errors, thereby accelerating the software release cycle while maintaining quality assurance.
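The gating idea in the abstract can be sketched as a simple check: collect a minimum number of user samples from the new release, then compare the new release's metric against the variation observed across historical releases. This is a minimal illustrative sketch, not the disclosed implementation; the function name, parameters, and thresholds (`min_users`, `max_sigma`) are assumptions for illustration.

```python
import statistics

def gate_release(new_samples, historical_release_means,
                 min_users=1000, max_sigma=3.0):
    """Decide whether a new release passes the automated gate.

    new_samples: per-user metric values collected from the new release.
    historical_release_means: mean metric value of each prior release.
    min_users: minimum sample size before a verdict is allowed
               (the dynamically set minimum number of users).
    max_sigma: maximum allowed deviation, in standard deviations of
               the historical release-to-release variation.

    Returns True (pass), False (block), or None (not enough data yet).
    """
    if len(new_samples) < min_users:
        return None  # keep collecting data before judging the release

    hist_mean = statistics.mean(historical_release_means)
    hist_sd = statistics.stdev(historical_release_means)
    new_mean = statistics.mean(new_samples)

    # Block only if the new release deviates from the historical norm
    # by more than max_sigma historical standard deviations; ordinary
    # release-to-release noise passes the gate.
    return abs(new_mean - hist_mean) <= max_sigma * hist_sd
```

Comparing against the spread of past releases, rather than a concurrent control group, is what lets ordinary release-to-release noise pass while genuine regressions are blocked.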
Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Knab, Brian; Nusenovich, Pablo; Percival, Dancsi; Lu, Xiaoqi; Wang, Ying; and Lemon, Alex, "META-ANALYSIS FOR SOFTWARE RELEASES", Technical Disclosure Commons, ()
https://www.tdcommons.org/dpubs_series/9943