Abstract
Large language models currently require extensive and nuanced textual prompting to achieve specific styles, tones, or narrative structures in generated content. Conveying these attributes through text alone is often difficult and requires iterative refinement by the user.
This disclosure describes a method for defining the style and narrative of generated content using non-textual inputs. A library of predefined personas, each associated with specific stylistic attributes and vocabulary, is provided for selection. Upon selection, the persona is represented by a visual color gradient and a summary of its core attributes. The generated output is further enhanced with color-coded visual feedback, where text segments are highlighted to represent different narrative tones. Users can interactively adjust these gradients via sliders to modify the balance of positive or negative content.
This approach simplifies the steering of model outputs and provides immediate visual cues for content consumption, reducing the need for repetitive manual prompting.
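The mechanism described above, a persona library whose entries carry stylistic attributes and a visual gradient, with a slider that shifts the tone balance, can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the persona names, attribute lists, gradient colors, and the slider-to-prompt mapping are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    gradient: tuple       # (start_hex, end_hex) endpoints of the visual gradient
    attributes: list      # core stylistic attributes shown in the summary

# Hypothetical library of predefined personas.
PERSONAS = {
    "noir_narrator": Persona("Noir Narrator", ("#2b2d42", "#8d99ae"),
                             ["terse", "moody", "first-person"]),
    "cheerful_guide": Persona("Cheerful Guide", ("#ffb703", "#fb8500"),
                              ["upbeat", "plain-language", "encouraging"]),
}

def gradient_color(start_hex, end_hex, t):
    """Interpolate the persona's gradient at position t in [0, 1],
    e.g. to highlight a text segment by narrative tone."""
    s = [int(start_hex[i:i + 2], 16) for i in (1, 3, 5)]
    e = [int(end_hex[i:i + 2], 16) for i in (1, 3, 5)]
    return "#" + "".join(f"{round(a + (b - a) * t):02x}" for a, b in zip(s, e))

def build_style_directive(persona, positivity):
    """Map the selected persona plus a slider value (0 = fully negative,
    1 = fully positive) to a prompt fragment steering the model."""
    tone = "positive" if positivity >= 0.5 else "negative"
    return (f"Write as the persona '{persona.name}' "
            f"({', '.join(persona.attributes)}); "
            f"favor a {tone} tone with weight {positivity:.2f}.")

p = PERSONAS["noir_narrator"]
print(gradient_color(*p.gradient, 0.5))       # midpoint of the gradient
print(build_style_directive(p, 0.8))
```

In this sketch the user never writes a stylistic prompt directly: selecting a persona and moving the slider regenerates the directive, and the same gradient function supplies the colors used to highlight output segments.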
Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Khan, Gulmohar, "Persona-Based Style Selection and Visual Feedback for Large Language Model Outputs", Technical Disclosure Commons, ()
https://www.tdcommons.org/dpubs_series/10105