This article outlines a novel approach to personalizing large language model (LLM) outputs to an individual writer in an artificial intelligence (AI)-assisted writing application, without the need for fine-tuning or prompt engineering. In this approach, an individual’s writing style is encoded by a compact, learned model that maps writing samples to an "author embedding," which is prepended to the input of an LLM (in the manner of prefix-tuning) to steer the model toward generating content in that individual’s writing style. The presented techniques involve several processing steps: selecting an optimal subset of an author’s writing samples, training an author-embedding model, and using the author embedding as a prefix to the LLM.
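The prefix mechanism described above can be sketched as follows. This is a minimal illustration, not the author's implementation: the dimensions, the linear author-embedding model, and the mean-pooling of sample features are all assumptions made for the sake of a runnable example. The key idea shown is that the author embedding is reshaped into a short sequence of prefix vectors and concatenated ahead of the ordinary token embeddings before the combined sequence is fed to the (frozen) LLM.

```python
import numpy as np

# Illustrative dimensions (assumptions, not from the article).
d_model = 16    # LLM hidden size
n_prefix = 4    # number of prefix vectors produced from the author embedding
vocab = 100     # toy vocabulary size

rng = np.random.default_rng(0)
# Frozen LLM token-embedding table (stand-in for the real model's table).
token_embedding = rng.normal(size=(vocab, d_model))
# Learned author-embedding model; a single linear map here for simplicity.
W_author = rng.normal(size=(d_model, n_prefix * d_model))

def author_embedding(sample_vecs):
    """Map pooled writing-sample features to an (n_prefix, d_model) prefix."""
    pooled = sample_vecs.mean(axis=0)          # average features across samples
    return (pooled @ W_author).reshape(n_prefix, d_model)

def build_llm_input(token_ids, sample_vecs):
    """Prepend the author-embedding prefix to the token embeddings."""
    prefix = author_embedding(sample_vecs)     # (n_prefix, d_model)
    tokens = token_embedding[token_ids]        # (seq_len, d_model)
    return np.concatenate([prefix, tokens], axis=0)

# Stand-in feature vectors for 3 of the author's writing samples.
samples = rng.normal(size=(3, d_model))
inp = build_llm_input(np.array([1, 2, 3]), samples)
print(inp.shape)  # (7, 16): 4 prefix vectors + 3 token embeddings
```

In practice the prefix vectors would be consumed by the LLM's first transformer layer exactly like ordinary token embeddings, which is what lets the prefix steer generation without modifying any LLM weights.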
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Hutton, Elizabeth, "AUTHOR-SPECIFIC PREFIX-TUNING FOR PERSONALIZATION OF LARGE LANGUAGE MODELS", Technical Disclosure Commons, (August 31, 2023)