Abstract
Current techniques for transforming two-dimensional images into three-dimensional images can generate passable generic 3D environments based on the visual content of the original 2D image. However, such techniques may fail to accurately capture the unique location depicted in the 2D image. This disclosure describes techniques that leverage location-specific data, e.g., Exif data, of a two-dimensional image to enhance the realism and accuracy of a three-dimensional image generated from it. With user permission, the location data (latitude and longitude) recorded at the time of capture is fed into a visual positioning system (VPS), which identifies the precise location at which the image was taken. Multiple viewpoints covering the camera's field of view from that location are then retrieved. The original two-dimensional image and these additional viewpoints are fed into a 2D-to-3D transformer to generate a more realistic and accurate three-dimensional image that faithfully reflects the real-world setting depicted in the original image.
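The first step of the described pipeline, reading the capture location from Exif data, can be sketched as follows. Exif stores GPS coordinates as degrees/minutes/seconds rationals plus hemisphere references, which must be converted to signed decimal degrees before querying a VPS. The `gps_ifd` dictionary below is a hypothetical example of already-parsed Exif GPS tags; in practice these values would come from an Exif reader such as Pillow's `Image.getexif().get_ifd(0x8825)`.

```python
def dms_to_decimal(dms, ref):
    """Convert an Exif (degrees, minutes, seconds) tuple to signed
    decimal degrees; southern/western hemispheres are negative."""
    degrees, minutes, seconds = dms
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

# Hypothetical parsed GPS IFD for an image captured near the Eiffel Tower.
gps_ifd = {
    "GPSLatitudeRef": "N",
    "GPSLatitude": (48.0, 51.0, 29.6),
    "GPSLongitudeRef": "E",
    "GPSLongitude": (2.0, 17.0, 40.2),
}

lat = dms_to_decimal(gps_ifd["GPSLatitude"], gps_ifd["GPSLatitudeRef"])
lon = dms_to_decimal(gps_ifd["GPSLongitude"], gps_ifd["GPSLongitudeRef"])

# The (lat, lon) pair is what would be sent, with user permission,
# to the VPS to pinpoint the capture location.
print(round(lat, 5), round(lon, 5))
```

The subsequent steps (VPS lookup, viewpoint retrieval, and the 2D-to-3D transformer itself) depend on proprietary services and models and are not sketched here.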
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Hasan, Shiblee and Bryan, Kathleen Alexandra, "Expanding 2D Images into Immersive 3D Images Using Location Data", Technical Disclosure Commons, (July 29, 2024)
https://www.tdcommons.org/dpubs_series/7244