Abstract
A method is described for managing multiple personas within a single large language model (LLM) session, addressing the challenge that distinct roles may blur together or drop out of the output. The method can use structured delimiters, such as XML-style tags, to differentiate persona-specific text. For example, an initial prompt can establish each persona and its unique tag, with instructions for the model to enclose each persona's output within its corresponding tags. The delimiters may function as high-salience tokens that condition the model's attention mechanism, promoting stylistic and logical consistency for each persona. This conditioning can guide the model to assign higher attention weights to prior text segments enclosed in the same tags. The technique can facilitate iterative interaction between simulated agents within a single model instance, potentially offering benefits associated with multi-agent collaboration while reducing computational expense and architectural complexity compared to operating separate systems.
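As a minimal sketch of the delimiter scheme described above, the following Python snippet builds an initial prompt that establishes each persona with its unique tag, and parses tagged model output back into per-persona segments. The persona names, descriptions, and helper functions are illustrative assumptions, not part of the original disclosure.

```python
import re

# Hypothetical personas; the names, tags, and descriptions are assumptions.
PERSONAS = {
    "planner": "A methodical strategist who outlines next steps.",
    "critic": "A skeptical reviewer who probes for flaws.",
}

def build_system_prompt(personas):
    """Compose an initial prompt establishing each persona and its tag."""
    lines = ["You will simulate the personas below within one session."]
    for name, description in personas.items():
        lines.append(f"<{name}>...</{name}> : {description}")
    lines.append("Always enclose each persona's output in its matching tag.")
    return "\n".join(lines)

def split_by_persona(model_output, personas):
    """Extract each persona's text segments from tagged model output."""
    segments = {}
    for name in personas:
        pattern = re.compile(rf"<{name}>(.*?)</{name}>", re.DOTALL)
        segments[name] = [m.strip() for m in pattern.findall(model_output)]
    return segments

# Example with a mocked model reply (no actual LLM call is made here):
reply = ("<planner>Draft an outline first.</planner>"
         "<critic>The outline lacks a success metric.</critic>")
parsed = split_by_persona(reply, PERSONAS)
```

Feeding the parsed, tag-wrapped segments back into subsequent turns is what lets the delimiters act as the recurring high-salience tokens the abstract describes.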
Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Phoenix, Christopher Jonathan, "Persona State Management in a Large Language Model Session Using Structured Text Delimiters", Technical Disclosure Commons, (October 15, 2025)
https://www.tdcommons.org/dpubs_series/8720