Abstract
Online shopping interfaces may not fully support visual and exploratory product discovery when users lack specific vocabulary to articulate desired attributes in text-based searches. The disclosed technology relates to a system that may perform contextual analysis on a collection of displayed product results. For example, a large language model (LLM) can be used to identify relative distinguishing features among the items, in addition to or as an alternative to their absolute attributes. Based on this analysis, the system may generate dynamic, descriptive tags that can function as visual refinement points. These interactive tags can be overlaid on product images, which may enable users to refine a search by selecting a specific feature of interest. This method can facilitate a browsing experience that supports product discovery and search refinement, potentially reducing reliance on explicit textual input from the user.
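The flow the abstract describes — analyzing a collection of results for relative distinguishing features and filtering by a selected tag — could be sketched as follows. This is a minimal illustration, not the disclosed implementation; the `generate` callable stands in for an LLM invocation, and all names (`Product`, `distinguishing_tags`, `refine`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    description: str

def distinguishing_tags(products, generate):
    """For each product, ask a model for features that set it apart
    from the rest of the collection (relative, not absolute, attributes)."""
    tags = {}
    for p in products:
        others = "; ".join(o.description for o in products if o is not p)
        prompt = (f"Item: {p.description}\nOther items: {others}\n"
                  "List short tags for features distinguishing this item.")
        tags[p.name] = generate(prompt)
    return tags

def refine(products, tags, selected_tag):
    """Keep only products whose tags include the user-selected feature,
    mimicking a tap on an overlaid refinement tag."""
    return [p for p in products if selected_tag in tags[p.name]]

def fake_llm(prompt):
    # Stand-in for a real LLM call: extracts simple keywords from the
    # first line of the prompt (the item's own description).
    item_line = prompt.splitlines()[0]
    return [w for w in ("leather", "canvas", "zipper") if w in item_line]

catalog = [
    Product("A", "leather tote with zipper"),
    Product("B", "canvas tote"),
    Product("C", "leather backpack"),
]
tags = distinguishing_tags(catalog, fake_llm)
leather_only = refine(catalog, tags, "leather")  # keeps products A and C
```

In a real system, the generated tags would be rendered as interactive overlays on the product images, and selecting one would re-run the search constrained by that feature.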
Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Fedyk, Ryan, "Visual Search Refinement Based on Relative Feature Analysis in a Product Collection", Technical Disclosure Commons, (October 17, 2025)
https://www.tdcommons.org/dpubs_series/8739