Abstract

Invoking a visual search typically requires a user to first open an application that supports the functionality and then point the camera at the space or objects of interest. This approach is time-consuming and cumbersome, and it limits the types of actions that visual search capabilities can support. This disclosure describes techniques that enable users to invoke visual searches from their device cameras via intuitive physical gestures. Users can then take further actions on objects within view.
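
The disclosure does not specify a platform, gesture set, or API. As a rough illustration only, the sketch below assumes Android, a crude accelerometer heuristic standing in for a "raise-and-point" gesture, and a hypothetical VisualSearchActivity that hosts the camera-based search experience; it is not the disclosed implementation.

```kotlin
import android.content.Context
import android.content.Intent
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import kotlin.math.abs

// Minimal sketch: listen for a device pose that suggests the user is pointing
// the camera at a scene, then launch a (hypothetical) visual search screen
// directly, skipping the usual "open the app, then point the camera" flow.
class GestureLaunchListener(private val context: Context) : SensorEventListener {

    private var lastTriggerMs = 0L

    override fun onSensorChanged(event: SensorEvent) {
        if (event.sensor.type != Sensor.TYPE_ACCELEROMETER) return
        val y = event.values[1]
        val z = event.values[2]
        // Crude heuristic (assumption): gravity mostly along the y-axis and
        // little along z suggests the phone is held upright, camera forward.
        val pointingAtScene = y > 7f && abs(z) < 3f
        val now = System.currentTimeMillis()
        if (pointingAtScene && now - lastTriggerMs > 5_000) {
            lastTriggerMs = now
            // VisualSearchActivity is a placeholder for whatever component
            // hosts the camera view, recognition, and follow-on actions.
            context.startActivity(
                Intent(context, VisualSearchActivity::class.java)
                    .addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
            )
        }
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) = Unit
}

// Registration would typically happen in a long-lived component, e.g.:
// sensorManager.registerListener(
//     GestureLaunchListener(context),
//     sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
//     SensorManager.SENSOR_DELAY_NORMAL
// )
```

In practice the gesture recognizer could be any signal the platform exposes (a squeeze, a double-tap on the device back, a lift-to-look motion); the point of the technique is that the gesture itself invokes the visual search, after which the user can act on recognized objects in view.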

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
