Abstract
This disclosure describes a method and wearable system for generating synchronized synthetic images from heterogeneous cameras, including a global-shutter (GS) camera and a rolling-shutter (RS) camera, within a simulated environment. GS cameras capture all pixels simultaneously, while RS cameras capture images sequentially row by row, causing different rows to correspond to different capture times and making synchronization difficult. The disclosed method addresses this challenge by reconstructing a rolling-shutter image using multiple global-shutter images generated at calculated timestamps. Using rolling-shutter parameters such as exposure time and readout time, the system computes row-level timestamps relative to a center-of-exposure (COE) reference time associated with the GS camera image. Corresponding rows are extracted from global-shutter images rendered at these timestamps and sequentially combined to reconstruct a rolling-shutter image that reflects the temporal characteristics of real RS sensors while remaining synchronized with the GS camera image. This approach enables realistic simulation of heterogeneous camera systems and supports development and testing of perception algorithms in virtual environments prior to deployment on physical devices.
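The row-wise reconstruction described above can be sketched in a short simulation. The snippet below is an illustrative toy, not the disclosed implementation: the sensor dimensions, readout and exposure values, and the `render_gs` scene (a vertical bar moving at a fixed speed) are all assumptions chosen for demonstration. It computes each row's center-of-exposure (COE) timestamp relative to the GS frame's COE, renders a global-shutter frame at each of those timestamps, and keeps only the matching row.

```python
# Hypothetical rolling-shutter parameters for illustration only
# (not taken from the disclosure).
NUM_ROWS, NUM_COLS = 8, 16
ROW_READOUT_S = 0.001  # delay between successive row readouts
EXPOSURE_S = 0.004     # per-row exposure time

def render_gs(t):
    """Stand-in for the simulator's global-shutter render at time t:
    all rows are captured simultaneously, showing a bright vertical
    bar that moves right at 1000 columns per second."""
    img = [[0.0] * NUM_COLS for _ in range(NUM_ROWS)]
    col = int(round(t * 1000.0)) % NUM_COLS
    for row in img:
        row[col] = 1.0
    return img

def row_coe_timestamps(t_coe):
    """Center-of-exposure time of each row. Row r begins exposing at
    t0 + r * ROW_READOUT_S, so its COE is that start plus half the
    exposure time. Aligning the middle row's COE with the GS frame's
    COE time t_coe cancels the exposure term, leaving a per-row offset
    proportional to the readout time."""
    mid = (NUM_ROWS - 1) / 2.0
    return [t_coe + (r - mid) * ROW_READOUT_S for r in range(NUM_ROWS)]

def reconstruct_rs(t_coe):
    """Render a GS frame at each row's timestamp and keep only that
    row; stacking the rows yields the synthetic rolling-shutter image
    synchronized with the GS image at t_coe."""
    return [render_gs(t)[r] for r, t in enumerate(row_coe_timestamps(t_coe))]
```

Because each row is sampled at a slightly later time, the moving bar appears slanted in the reconstructed rolling-shutter image while remaining vertical in any single global-shutter frame, reproducing the characteristic temporal skew of real RS sensors.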
Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Huai, Zheng; Rahman, Sazzadur; and Guo, Chao, "Synchronized Synthetic Global-Shutter and Rolling-Shutter Camera Images", Technical Disclosure Commons, (March 16, 2026)
https://www.tdcommons.org/dpubs_series/9531