Abstract

The efficacy of large language models can depend heavily on prompt quality, and manual prompt engineering is often an inefficient, trial-and-error process. Existing automated optimization approaches typically require a pre-existing seed prompt and may lack methodological guidance, potentially limiting their effectiveness. A system is described for the automated optimization of instructional prompts that employs a closed-loop algorithmic pipeline to iteratively generate, evaluate, and refine candidate prompts. The process can be initiated without a human-provided seed prompt; initial candidates are instead derived from reference data and a structured methodological framework, such as a chain-of-thought workflow. This automated, data-driven approach may reduce the inefficiencies associated with manual prompt engineering and can provide a scalable method for discovering and refining effective prompts across tasks.
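The closed loop described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the `seed_from_references`, `evaluate`, and `refine` functions are hypothetical stand-ins (a real system would score each candidate prompt by running it against a model on the reference data).

```python
import random

# Illustrative reference data (question, expected answer) pairs.
REFERENCES = [
    ("What is 2+2?", "4"),
    ("What is 3+3?", "6"),
]

def seed_from_references(references):
    """Derive initial candidates from reference data plus a structured
    (chain-of-thought style) template, with no human-provided seed."""
    base = "Answer the question. Show your reasoning step by step."
    return [base, base + " Then state the final answer on its own line."]

def evaluate(prompt, references):
    """Toy scoring heuristic (stand-in for model-based evaluation):
    reward prompts that elicit reasoning and a delimited final answer."""
    score = 0.0
    if "step by step" in prompt:
        score += 1.0
    if "final answer" in prompt:
        score += 1.0
    score += min(len(prompt), 200) / 200.0  # mild length signal
    return score

def refine(prompt, rng):
    """Propose a variant of a candidate prompt (refinement step)."""
    additions = [
        " Be concise.",
        " Verify the result before answering.",
        " Then state the final answer on its own line.",
    ]
    return prompt + rng.choice(additions)

def optimize(references, iterations=10, seed=0):
    """Closed loop: generate -> evaluate -> refine, keeping the best."""
    rng = random.Random(seed)
    candidates = seed_from_references(references)
    best = max(candidates, key=lambda p: evaluate(p, references))
    for _ in range(iterations):
        variant = refine(best, rng)
        if evaluate(variant, references) > evaluate(best, references):
            best = variant
    return best

if __name__ == "__main__":
    print(optimize(REFERENCES))
```

The loop only ever replaces the incumbent with a strictly better-scoring variant, so the best candidate's score is monotonically non-decreasing across iterations.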

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
