Inventor(s)

Vikram Auradkar

Abstract

This document describes a framework for simulating immersive computing environments, specifically targeting augmented reality (AR) and extended reality (XR) development. The system uses a dual-input architecture comprising a 3D scene file, such as a Graphics Language Transmission Format (glTF) file, and a corresponding scene information file containing semantic metadata. This metadata defines environmental elements including planar surfaces such as floors and ceilings, vertical walls, and spatial anchors for movable or immovable objects. By incorporating configurable visibility conditions (field-of-view triggers, distance thresholds, and ambient lighting levels), the framework simulates the dynamic discovery of scene information that physical sensors would detect in a real AR or XR environment. The simulated environment allows developers to test spatially aware applications and debug scene-dependent interactions within an emulator, bypassing the need for hardware sensors while preserving high-fidelity spatial coordinates and accurate surface normals.
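For illustration only, the following Python sketch shows how a scene information file and its visibility conditions might be represented and evaluated inside an emulator. The field names (for example plane_type, max_distance_m, min_ambient_lux, fov_deg) and the evaluation logic are hypothetical assumptions, not the schema or code of the disclosed framework.

import math
from dataclasses import dataclass

# Hypothetical scene information entries; field names are illustrative
# assumptions, not the framework's actual schema.
SCENE_INFO = [
    {
        "id": "floor_0",
        "plane_type": "horizontal_floor",
        "center": (0.0, 0.0, 0.0),
        "normal": (0.0, 1.0, 0.0),
        "visibility": {"max_distance_m": 5.0, "min_ambient_lux": 50.0},
    },
    {
        "id": "anchor_lamp",
        "anchor_type": "movable",
        "center": (1.2, 0.8, -2.0),
        "visibility": {"max_distance_m": 3.0, "fov_deg": 60.0},
    },
]

@dataclass
class CameraState:
    """Simulated emulator camera pose and ambient light level."""
    position: tuple
    forward: tuple  # unit vector of the view direction
    ambient_lux: float

def _angle_deg(forward, position, target):
    """Angle between the view direction and the vector to the target."""
    to_target = tuple(t - p for t, p in zip(target, position))
    norm = math.sqrt(sum(c * c for c in to_target))
    if norm == 0.0:
        return 0.0
    dot = sum(f * c for f, c in zip(forward, to_target)) / norm
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

def discovered_elements(camera, scene_info):
    """Return elements whose visibility conditions are met, mimicking
    the gradual discovery a physical sensor would report."""
    visible = []
    for element in scene_info:
        cond = element.get("visibility", {})
        # Distance threshold: element must be close enough to "detect".
        if math.dist(camera.position, element["center"]) > cond.get("max_distance_m", math.inf):
            continue
        # Ambient lighting level: too dark means no detection.
        if camera.ambient_lux < cond.get("min_ambient_lux", 0.0):
            continue
        # Field-of-view trigger: element must be within the view cone.
        fov = cond.get("fov_deg")
        if fov is not None and _angle_deg(camera.forward, camera.position, element["center"]) > fov / 2:
            continue
        visible.append(element["id"])
    return visible

if __name__ == "__main__":
    camera = CameraState(position=(0.0, 1.5, 1.0), forward=(0.0, 0.0, -1.0), ambient_lux=120.0)
    print(discovered_elements(camera, SCENE_INFO))  # -> ['floor_0']

In a full implementation, the emulator would presumably re-evaluate these conditions every frame as the virtual camera moves, so that planes and anchors appear incrementally, much as plane detection unfolds on physical hardware.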

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
