Abstract
Localization processes ensure that devices and software function correctly and consistently across different languages and for users of different cultures. Internationalization (i18n) testing is an important component of the localization process. Traditional i18n testing techniques are time-consuming, expensive, and difficult to scale because they involve substantial manual work, and the resulting tests can be of variable quality. This disclosure describes the use of a large language model (LLM) to automatically generate test cases for various i18n scenarios and to validate results generated by the software under test. Test cases are generated automatically by using an LLM to translate test inputs into multiple languages, and results generated by the software are evaluated by providing the LLM with appropriate prompts to determine whether each result is correct. A multimodal LLM, or modality-specific models, can be used to run tests across different content formats, including text, audio, and images. These techniques enable scalable test automation with improved accuracy and reliability.
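The two LLM roles described above can be sketched as follows. This is a minimal illustration only: the function names, prompt wording, and the `llm` callable are assumptions for exposition; the disclosure does not specify a particular API or prompt format.

```python
# Hypothetical sketch of LLM-driven i18n testing. `llm` stands in for any
# text-generation call (e.g., a multimodal model endpoint); all prompts
# and names below are illustrative, not from the disclosure.

def build_generation_prompt(test_input: str, language: str) -> str:
    """Ask the LLM to translate a test input into a target language."""
    return (
        f"Translate the following UI test input into {language}, "
        f"preserving placeholders and formatting:\n{test_input}"
    )

def build_validation_prompt(expected: str, actual: str, language: str) -> str:
    """Ask the LLM to judge whether the software's output is correct."""
    return (
        f"The software under test produced this {language} output:\n{actual}\n"
        f"The source-language reference is:\n{expected}\n"
        "Answer PASS if the output is a correct, culturally appropriate "
        "rendering; otherwise answer FAIL."
    )

def generate_test_cases(llm, test_input: str, languages: list[str]) -> dict[str, str]:
    """Produce one translated test case per target language."""
    return {lang: llm(build_generation_prompt(test_input, lang)) for lang in languages}

def validate_output(llm, expected: str, actual: str, language: str) -> bool:
    """Interpret the LLM's verdict as a boolean test result."""
    verdict = llm(build_validation_prompt(expected, actual, language))
    return verdict.strip().upper().startswith("PASS")
```

For audio or image content, the same generate/validate pattern would apply, with the prompts and model swapped for modality-specific (or multimodal) equivalents.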
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Wang, Tracy and Dai, Wendy, "Automated Internationalization Testing Using a Large Language Model", Technical Disclosure Commons, (September 11, 2024)
https://www.tdcommons.org/dpubs_series/7340