Abstract
Current software testing methods and metrics do not provide a comprehensive, objective, and automated assessment of the UX in real time, making it difficult to proactively identify and address usability issues that negatively impact the UX and business goals. This disclosure describes techniques that leverage the capabilities of large language models (LLMs) to provide a deep, comprehensive, and nuanced understanding of the UX of a software solution, taking into account the user’s intent and context. Preprocessed interaction analytics and contextual information, obtained with user permission, can be analyzed via an LLM that is trained and fine-tuned to infer user intent, classify the UX, and identify potential usability issues. The LLM output can be translated into quantifiable, normalized UX metrics and displayed to human analysts on a user-friendly dashboard. The output can also be used to generate actionable suggestions to enhance the software by addressing any usability problems that are detected. This proactive approach, with human analysts in the loop as appropriate, can enable rapid identification and resolution of UX issues at scale.
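As a rough illustration of the metric-translation step described above, the sketch below converts structured LLM output into normalized UX metrics suitable for a dashboard. All field names, the 1–5 scoring scale, and the per-issue penalty are assumptions for illustration only; the disclosure does not specify a concrete schema.

```python
import json

# Hypothetical sketch: the dimension names, 1-5 scale, and penalty weight
# below are illustrative assumptions, not details from the disclosure.
def normalize_ux_metrics(llm_output: str) -> dict:
    """Map assumed 1-5 LLM scores onto normalized [0, 1] UX metrics."""
    raw = json.loads(llm_output)
    metrics = {}
    for dimension in ("task_success", "efficiency", "satisfaction"):
        score = raw.get(dimension, 1)
        metrics[dimension] = (score - 1) / 4  # rescale 1-5 to 0.0-1.0
    # Penalize the overall score slightly for each detected usability issue.
    penalty = 0.05 * len(raw.get("usability_issues", []))
    overall = sum(metrics.values()) / len(metrics) - penalty
    metrics["overall"] = max(0.0, min(1.0, overall))
    return metrics

# Example of the kind of structured output an LLM might be prompted to emit.
sample = json.dumps({
    "inferred_intent": "complete checkout",
    "task_success": 5,
    "efficiency": 3,
    "satisfaction": 4,
    "usability_issues": ["ambiguous call-to-action label"],
})
print(normalize_ux_metrics(sample))
```

In practice, such normalized metrics could be computed continually over sessions and surfaced on the analyst dashboard alongside the LLM's identified issues.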
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Start, Johannes and Lunney, John, "Optimizing User Experience by Continual Automated Assessment of Usability Metrics Obtained Using a Large Language Model", Technical Disclosure Commons, (August 05, 2025)
https://www.tdcommons.org/dpubs_series/8430