Abstract

Selecting a database service can be challenging when decisions rely on static information or generalized benchmarks, which may not align with an application's specific runtime performance needs, potentially leading to implementation risks. A system can provide automated, performance-validated database recommendations. The system can translate a user's non-functional requirements into a profile used to programmatically provision temporary test environments for candidate databases. A workload generator can then simulate a user's specific application load within these environments, which may allow the system to harvest real-time performance and cost metrics. The collected empirical data can be processed by a scoring algorithm to calculate a cost-adjusted performance score. This process can provide a data-driven justification for selecting a database configuration that is empirically validated to be compatible with a user's performance and budgetary requirements.
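As an illustration of the scoring step, the sketch below shows one way a cost-adjusted performance score could be computed from harvested metrics. It is a minimal sketch under stated assumptions: the profile fields, metric names, disqualification rules, and throughput-per-dollar weighting are hypothetical choices for demonstration, not the disclosed algorithm.

from dataclasses import dataclass

# Illustrative sketch only: field names, weights, and the formula below are
# assumptions for demonstration, not the disclosed scoring algorithm.

@dataclass
class RequirementProfile:
    target_p99_ms: float       # latency the application requires
    monthly_budget_usd: float  # budget ceiling for the database service

@dataclass
class HarvestedMetrics:
    throughput_ops: float      # operations/sec measured under the simulated load
    p99_latency_ms: float      # measured 99th-percentile latency
    monthly_cost_usd: float    # estimated cost of the provisioned test configuration

def cost_adjusted_score(profile: RequirementProfile, metrics: HarvestedMetrics) -> float:
    """Score a candidate configuration: higher is better, zero if constraints are violated."""
    # Disqualify candidates that miss the latency target or exceed the budget.
    if metrics.p99_latency_ms > profile.target_p99_ms:
        return 0.0
    if metrics.monthly_cost_usd > profile.monthly_budget_usd:
        return 0.0
    # Reward throughput per dollar, scaled by latency headroom (assumed weighting).
    latency_headroom = 1.0 - metrics.p99_latency_ms / profile.target_p99_ms
    return (metrics.throughput_ops / metrics.monthly_cost_usd) * (1.0 + latency_headroom)

# Example: rank two hypothetical candidate configurations by score.
profile = RequirementProfile(target_p99_ms=50.0, monthly_budget_usd=800.0)
candidates = {
    "candidate-a": HarvestedMetrics(throughput_ops=12_000, p99_latency_ms=22.0, monthly_cost_usd=650.0),
    "candidate-b": HarvestedMetrics(throughput_ops=15_000, p99_latency_ms=48.0, monthly_cost_usd=780.0),
}
ranked = sorted(candidates, key=lambda name: cost_adjusted_score(profile, candidates[name]), reverse=True)
print(ranked)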

Publication Date

2026-01-07

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
