Automated Security Policy Validation from Natural Language Documentation Using Large Language Models
Abstract
Deriving security permissions from natural-language documentation, such as runbooks, can be a time-consuming and error-prone process, and mistakes may lead to runtime failures or security vulnerabilities. A system can use large language models (LLMs) to analyze documentation, infer a candidate permission policy, and generate a corresponding test script. The script can then be executed in a sandboxed environment under the candidate policy. An iterative refinement loop may diagnose permission-related failures and use an LLM to propose policy modifications that address them. This automated process can assist in generating a functional permission policy derived from operational instructions, which may reduce configuration errors and improve alignment between documentation and system requirements.
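The refinement loop described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the LLM calls and the sandbox are replaced by stub functions (`infer_policy`, `run_in_sandbox`, `propose_fix` are hypothetical names), and a policy is modeled as a simple set of permission strings.

```python
def infer_policy(doc):
    """Stub: an LLM would infer a candidate permission set from the docs."""
    return {"storage.read"}  # deliberately incomplete first guess

def run_in_sandbox(policy):
    """Stub: runs the generated test script under `policy` in a sandbox.
    Returns the name of one missing permission on failure, else None."""
    required = {"storage.read", "storage.write", "logging.write"}
    missing = required - policy
    return sorted(missing)[0] if missing else None

def propose_fix(policy, missing):
    """Stub: an LLM would map the diagnosed failure to a policy change."""
    return policy | {missing}

def refine_policy(doc, max_iters=10):
    """Iteratively refine the candidate policy until the test passes."""
    policy = infer_policy(doc)
    for _ in range(max_iters):
        failure = run_in_sandbox(policy)
        if failure is None:
            return policy  # policy is functional
        policy = propose_fix(policy, failure)
    raise RuntimeError("no functional policy found within iteration budget")

print(sorted(refine_policy("runbook text")))
```

In this toy run the loop converges after adding the two permissions the initial guess missed; a real system would bound iterations similarly to guard against a non-converging LLM.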
Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Kuligin, Leonid and Ostapenko, Aleksandr, "Automated Security Policy Validation from Natural Language Documentation Using Large Language Models", Technical Disclosure Commons, (April 01, 2026)
https://www.tdcommons.org/dpubs_series/9680