Abstract
The multi-sensory features of smart devices and AR/VR products can be difficult to test holistically under an accurate simulation of human-device interactions. Artificial intelligence (AI) companions onboard smart devices further add to testing complexity. This disclosure describes techniques for testing AI interactions with smart devices by leveraging an automated testing platform with displays, speakers, and robotic arms to simulate natural human input. A 3D-printed proxy human head equipped with ear-microphones, eye-cameras, a mouth-speaker, and other sensors simulates human interaction with the device-under-test (DUT) and its onboard AI companion while collecting performance data. End-to-end testing and simulation of human interaction with AI companions is achieved by displaying a visual cue or object to the DUT; positioning the proxy human head to mimic natural human behavior; activating the DUT using controlled, realistic human inputs; and capturing the response of the DUT using sensors onboard the proxy head. The performance of the DUT can be characterized using metrics such as response time and the accuracy and keyword-relevance of its image and voice responses.
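The four-step test sequence above lends itself to scripted orchestration. The following is a minimal Python sketch of how one test case might be driven end to end, assuming hypothetical hardware interfaces (ProxyHead, run_test_case, and all other names are illustrative placeholders, not part of the disclosure):

```python
"""Sketch of the end-to-end test sequence: display a cue, position the
proxy head, activate the DUT, capture its response, and score it.
All names here are hypothetical stand-ins for the platform's real APIs."""
import time
from dataclasses import dataclass


@dataclass
class DUTResponse:
    latency_s: float          # time from prompt to captured response
    image_keywords: list      # keywords detected in the DUT's image response
    transcript: str           # transcript of the DUT's voice response


class ProxyHead:
    """Hypothetical 3D-printed proxy head with eye-cameras, ear-microphones,
    and a mouth-speaker, positioned by a robotic arm."""

    def position(self, pose: str) -> None:
        print(f"[rig] moving proxy head to pose: {pose}")

    def speak(self, utterance: str) -> None:
        print(f"[rig] mouth-speaker playing: {utterance!r}")

    def capture(self) -> tuple[list, str]:
        # Stubbed sensor capture; a real rig would record audio/video here.
        return ["weather", "forecast"], "Here is today's forecast."


def run_test_case(head, display_cue, prompt, expected_keywords, pose="frontal"):
    # 1. Display a visual cue or object to the DUT (stubbed as a print).
    print(f"[rig] displaying cue: {display_cue}")
    # 2. Position the proxy head to mimic natural human behavior.
    head.position(pose)
    # 3. Activate the DUT using a controlled, realistic voice input.
    start = time.monotonic()
    head.speak(prompt)
    # 4. Capture the DUT response via the head's onboard sensors.
    image_keywords, transcript = head.capture()
    latency = time.monotonic() - start
    # 5. Characterize performance: response time and keyword-relevance.
    relevance = len(set(image_keywords) & set(expected_keywords)) / len(expected_keywords)
    return DUTResponse(latency, image_keywords, transcript), relevance


if __name__ == "__main__":
    response, relevance = run_test_case(
        ProxyHead(),
        display_cue="weather-widget.png",
        prompt="Hey assistant, what's the weather today?",
        expected_keywords=["weather", "forecast", "temperature"],
    )
    print(f"latency={response.latency_s:.2f}s, keyword-relevance={relevance:.0%}")
```

In a real deployment, the stubbed capture step would feed recorded audio and video into speech-to-text and image-analysis pipelines, and the keyword-relevance score sketched here is only one of several possible accuracy metrics.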
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Chen, Yang; Lu, Chen; Yuan, Sean; Guo, Chao; and Jia, Luke, "Automated End-to-end Testing of Artificial Intelligence (AI) Functionalities of Smart Devices", Technical Disclosure Commons (January 17, 2025).
https://www.tdcommons.org/dpubs_series/7736