Abstract

Current virtual reality (VR) content creation is complex, and the sensory feedback produced by haptic devices relies on proprietary, pre-programmed effects. This limits the availability of personalized, deeply immersive experiences and creates a fragmented hardware ecosystem. This disclosure describes a method for generating multi-sensory VR experiences from user prompts. It uses an artificial intelligence pipeline to generate a narrative, visual assets, and a corresponding synchronized sensory track. A key component is an open communication protocol that enables standardized communication between any VR content and various haptic hardware devices. This approach facilitates the on-demand creation of cohesive, immersive experiences while fostering an open ecosystem for content and hardware development.
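To make the idea of a device-agnostic sensory track concrete, the sketch below shows what one message in such an open protocol might look like. The schema, field names (`timestamp_ms`, `actuator`, `intensity`), and JSON encoding are illustrative assumptions, not the protocol defined in the disclosure; the point is that effects are expressed in normalized, hardware-neutral terms so any compliant device can map them to its own actuators.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class HapticEvent:
    """One timed effect in a hypothetical device-agnostic sensory track."""
    timestamp_ms: int   # offset from the start of the VR scene
    actuator: str       # logical target such as "left_hand", not a vendor-specific ID
    effect: str         # generic effect name, e.g. "vibration"
    intensity: float    # normalized 0.0-1.0 so any hardware can scale it
    duration_ms: int

def encode_track(events):
    """Serialize a sensory track to JSON for transport to any haptic device."""
    return json.dumps({"version": 1, "events": [asdict(e) for e in events]})

track = encode_track([
    HapticEvent(timestamp_ms=0, actuator="left_hand",
                effect="vibration", intensity=0.6, duration_ms=120),
])
print(track)
```

Because the track references logical actuators and normalized intensities rather than vendor APIs, the same generated content could drive gloves, vests, or controllers from different manufacturers.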

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
