Abstract

Many search applications support multimodal search, where results in different modalities are delivered. In some contexts, search results may also include AI-generated content that is responsive to a user query. However, current search user interfaces do not enable users to easily specify their intent, e.g., web results vs. generative results, or text-only, image-only, or video-only results. This disclosure describes an intuitive, gesture-based user interface that enables users to specify their search intent when initiating a search. The gesture may be performed with reference to a user interface element such as a call-to-action button, a virtual keyboard, a dropdown menu, etc. Individual gestures or combinations of gestures can be mapped to result modalities.
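The gesture-to-modality mapping described above can be sketched as a simple lookup table keyed by gesture sequences. This is a minimal illustrative sketch; the gesture names, modality labels, and the fallback behavior are assumptions, not part of the disclosure:

```python
# Hypothetical mapping from gestures (or gesture combinations) to search-result
# modalities. All gesture names and modality labels are illustrative assumptions.
GESTURE_TO_MODALITY = {
    ("swipe_up",): "web",
    ("swipe_left",): "images",
    ("swipe_right",): "videos",
    ("long_press",): "generative",
    ("long_press", "swipe_up"): "generative_plus_web",  # combination gesture
}

def resolve_modality(gestures):
    """Return the result modality for a gesture sequence.

    Unrecognized sequences fall back to ordinary web results (an assumed
    default, chosen so the search still returns something useful).
    """
    return GESTURE_TO_MODALITY.get(tuple(gestures), "web")
```

For example, a long press on the search button might route the query to generative results, while an unrecognized gesture would simply fall back to a standard web search.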

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
