Abstract
Enforcing complex, team-specific software development rules that fall beyond the scope of typical static analysis tools can present challenges, and manual review for such rules may be slow and error-prone. The described techniques may address these challenges by employing automated agents powered by large language models within a code review workflow. These agents can be configured with natural language prompts that codify domain-specific knowledge and complex validation logic. When triggered by a code change, an agent can analyze the modification and its relationship to other relevant files in the codebase. The agent can then provide automated, inline feedback on adherence to best practices, such as verifying a configuration change against a source-of-truth file or checking whether application programming interface (API) changes are reflected in documentation. This process can improve code and configuration quality while reducing the burden on human reviewers.
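
As one possible illustration of the workflow the abstract summarizes, the minimal sketch below shows how such an agent might be expressed: a natural-language prompt codifying a team-specific rule, a list of related context files (e.g., a source-of-truth configuration), and a review step that combines these with the proposed change and asks a language model for inline feedback. All names here (ReviewAgent, call_llm, read_file, the prompt text, and the file path) are hypothetical placeholders under assumed interfaces, not elements of the published implementation.

```python
# Minimal sketch of a prompt-configured review agent. All identifiers and
# file paths are hypothetical; the LLM call and repository access are stubbed.

from dataclasses import dataclass
from typing import Callable


@dataclass
class ReviewAgent:
    """An agent defined by a natural-language rule and the extra context it needs."""
    name: str
    prompt: str               # codifies the team-specific rule in plain language
    context_paths: list[str]  # related files, e.g. a source-of-truth config

    def review(self, diff: str, read_file: Callable[[str], str],
               call_llm: Callable[[str], str]) -> str:
        # Assemble the rule, the proposed change, and the related files
        # into a single request for the language model.
        context = "\n\n".join(
            f"--- {path} ---\n{read_file(path)}" for path in self.context_paths
        )
        request = (
            f"{self.prompt}\n\n"
            f"Proposed change:\n{diff}\n\n"
            f"Relevant files:\n{context}\n\n"
            "Reply with inline review comments, or 'LGTM' if the rule is satisfied."
        )
        return call_llm(request)


# Example agent: verify a configuration change against a source-of-truth file.
config_agent = ReviewAgent(
    name="config-consistency",
    prompt=("You are a code reviewer. Check that every value changed in the diff "
            "respects the limits defined in the source-of-truth file."),
    context_paths=["configs/limits_source_of_truth.yaml"],  # hypothetical path
)

if __name__ == "__main__":
    # Stub dependencies so the sketch runs standalone; a real deployment would
    # read files from the repository and call an actual LLM backend, then post
    # the returned feedback as inline review comments.
    fake_diff = "- max_connections: 100\n+ max_connections: 10000"
    feedback = config_agent.review(
        diff=fake_diff,
        read_file=lambda path: "max_connections must not exceed 1000",
        call_llm=lambda req: "Comment: 10000 exceeds the documented limit of 1000.",
    )
    print(feedback)
```

In this sketch the validation logic lives entirely in the prompt and the listed context files, so a team could add or adjust rules without changing agent code, which mirrors the configuration-driven approach the abstract describes.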
Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Chen, Peng, "LLM-Powered Agents for Context-Aware Validation of Code and Configurations", Technical Disclosure Commons, (December 02, 2025)
https://www.tdcommons.org/dpubs_series/8973