Both server-based and client-based systems can be used to precisely determine the orientation of a user device. Existing sets of geographically referenced imagery include identifiable text, and the same text may appear in multiple images within a set, where those images are taken from different viewpoints and camera angles. Multiple observations of the same physical text in the world are used to triangulate measurements and derive a 3-dimensional (3D) location and direction for a specific text string. The 3D locations of these text strings are used to build a spatially indexed 3D text database, which can reside on a server and/or be downloaded onto the client device. When a client device captures an image, character recognition is performed on the image, and the recognized text is compared against the 3D text database to find database entries that match the text in the image. Once one or more text strings have been matched, a triangulation computation can derive the device's orientation and location.
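The matching and triangulation steps above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the database contents, text strings, and 2D coordinates are invented, the database is a plain dictionary rather than a spatial index, text matching is exact, and the bearings toward each matched landmark are assumed to already be expressed in the world frame (e.g., compass-aligned). The device position is then recovered by intersecting two bearing rays.

```python
import math

# Hypothetical spatially indexed 3D text database, reduced to 2D for
# brevity: each recognized string maps to its surveyed world position.
# All names and coordinates are illustrative, not from the disclosure.
TEXT_DB = {
    "JOE'S DINER": (10.0, 0.0),
    "MAIN ST": (0.0, 10.0),
    "PARKING": (25.0, -5.0),
}


def match_text(ocr_strings):
    """Return (string, world_position) pairs for OCR results found in the DB."""
    return [(s, TEXT_DB[s]) for s in ocr_strings if s in TEXT_DB]


def triangulate(observations):
    """Intersect two bearing rays to recover the device position.

    observations: list of ((px, py), bearing) pairs, where bearing is the
    world-frame angle (radians) from the device toward that landmark.
    Uses the first two observations; more would allow a least-squares fit.
    """
    (p1, a1), (p2, a2) = observations[:2]
    c1, s1 = math.cos(a1), math.sin(a1)
    c2, s2 = math.cos(a2), math.sin(a2)
    # Each landmark satisfies p_i = x + t_i * d_i with d_i the unit bearing
    # direction; eliminate x and solve t1*d1 - t2*d2 = p1 - p2 for t1.
    det = c1 * (-s2) - (-c2) * s1
    rx, ry = p1[0] - p2[0], p1[1] - p2[1]
    t1 = (rx * (-s2) - (-c2) * ry) / det
    # The device sits t1 units back along the first bearing ray.
    return (p1[0] - t1 * c1, p1[1] - t1 * s1)
```

For example, matching OCR output `["JOE'S DINER", "MAIN ST"]` yields two known landmarks, and feeding their positions with the measured bearings into `triangulate` gives the device location; once the position is known, the device heading follows from the difference between the measured image-relative angles and the world-frame bearings.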
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Reinhardt, Tilman; Pack, Jeremy; Hutchison, Allen; Filip, Daniel; and Brown, Brian, "DETERMINATION OF DEVICE POSE USING TEXT MATCHING FROM CAPTURED IMAGES", Technical Disclosure Commons, (October 05, 2017)