Abstract
On-screen user interfaces for interactive narratives may disrupt immersion and can lack mechanisms for persistent user engagement after a story concludes. A multi-device architecture may be used in which a primary display presents cinematic content while a synchronized secondary device, such as a smartphone, smartwatch, or other wearable device, manages diegetic user interactions via voice, motion, or touch. A remote server can orchestrate the experience and log user choices. This interaction data can be used to create a profile that contextually initializes a persistent artificial intelligence (AI) companion, seeding it with a memory of the user's specific narrative journey. This approach may reduce disruption by separating interaction from the primary display, facilitate long-term engagement through a context-aware AI character, and provide a structural method for content protection.
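The data flow described above — a server logging choices made on the secondary device and then seeding an AI companion with a memory of the user's narrative journey — could be sketched as follows. This is a minimal illustrative sketch, not the disclosed implementation; all class and function names (`InteractionLog`, `build_companion_seed`) are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class InteractionLog:
    """Hypothetical server-side record of one user's narrative choices."""
    user_id: str
    choices: list = field(default_factory=list)

    def record(self, scene: str, choice: str) -> None:
        # Each entry arrives from the synchronized secondary device.
        self.choices.append({"scene": scene, "choice": choice})

def build_companion_seed(log: InteractionLog) -> dict:
    """Turn the raw interaction log into a profile that contextually
    initializes the AI companion with a memory of the user's journey."""
    memory = [f"In '{c['scene']}', the user chose '{c['choice']}'."
              for c in log.choices]
    return {"user_id": log.user_id, "memory": memory}

# Example: two choices logged during playback, then used as the seed.
log = InteractionLog(user_id="u123")
log.record("forest crossroads", "spare the wolf")
log.record("castle gate", "reveal the token")
seed = build_companion_seed(log)
```

In a real system the seed would likely be injected into the companion's prompt or retrieval store, so the character can reference the user's specific choices in later conversations.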
Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Tian, Qingfei, "Multi-Device Interactive Narrative System With an AI Companion Initialized From Narrative Interaction Data", Technical Disclosure Commons, (April 06, 2026)
https://www.tdcommons.org/dpubs_series/9709