D Shin


Viewing contextually appropriate photos from a photo library currently requires that a user manually search or browse to find photos that match their context. This disclosure describes the use of machine learning techniques to automatically perform semantic matching of the current user context with photos in the user's photo library to identify contextually relevant photos. The techniques can be implemented, e.g., as part of a photo library application or any other application on the device. Per the techniques described herein, context information obtained with user permission is encoded to obtain a context embedding, which is compared with visual embeddings of photos from the user's photo library. The encoding is performed with a network trained using a contrastive loss, such that the L2 distance between the context embedding and a visual embedding is indicative of the semantic match of the photo to the current context. Photos in the library are ranked by the distance between their respective visual embeddings and the context embedding. Top-ranked, contextually appropriate photos are then automatically displayed to the user, eliminating the need to browse or search the photo library.
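The ranking step described above can be sketched as follows. This is a minimal illustration, not the actual implementation: it assumes the context and photo embeddings have already been produced by the contrastively trained encoders, and uses hypothetical toy vectors to show the L2-distance ranking.

```python
import numpy as np

def rank_photos_by_context(context_embedding, photo_embeddings, top_k=3):
    """Rank photos by L2 distance between their visual embeddings and the
    context embedding; a smaller distance indicates a closer semantic match."""
    distances = np.linalg.norm(photo_embeddings - context_embedding, axis=1)
    order = np.argsort(distances)  # ascending: best-matching photos first
    return order[:top_k], distances[order[:top_k]]

# Toy 4-dimensional embeddings (hypothetical values for illustration).
context = np.array([1.0, 0.0, 0.0, 0.0])
photos = np.array([
    [0.9, 0.1, 0.0, 0.0],  # close to the context
    [0.0, 1.0, 0.0, 0.0],  # semantically unrelated
    [1.0, 0.0, 0.1, 0.0],  # very close to the context
])
top_indices, top_distances = rank_photos_by_context(context, photos, top_k=2)
print(top_indices)  # indices of the two best-matching photos
```

In a real system the embeddings would come from the trained context and image encoders, and the top-ranked photos would be surfaced in the application's UI.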

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.