Inventor(s)

Juhyun Lee

Abstract

Accessibility features for vision-impaired users are typically provided by spoken readouts of on-screen icons or other material. The task of transcribing in-app icons to text for text-to-speech conversion is typically left to the app developer. For apps whose developers are not conscious of the need to make their products accessible, icon-to-speech translation is often unsatisfactory. For example, a refresh button and a back button may both simply be transcribed as “button,” which is unhelpful to a vision-impaired user.

This disclosure uses computer vision techniques to automatically infer, and provide to a vision-impaired user, text corresponding to UI elements, e.g., icons, dropdown boxes, sliders, buttons, etc. The techniques are advantageously implemented on a platform over which apps operate, e.g., operating system, browser, etc., such that vision-impaired users can access UI elements even if the app developer did not transcribe in-app icons to text.
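The disclosure does not specify a model architecture or platform API, so the following is a minimal, hypothetical sketch of the pipeline it describes: when an app supplies no label for a UI element, the platform runs a vision classifier over a screenshot crop of the element and speaks the inferred label. All names here (UIElement, IconClassifier, speak, ICON_LABELS) are assumptions for illustration, not an actual platform interface.

```python
# Minimal sketch of platform-level icon-to-speech inference.
# All names are hypothetical; the disclosure does not specify a
# model architecture or a text-to-speech interface.

from dataclasses import dataclass
from typing import Sequence

# Candidate labels a vision model might be trained to emit for common icons.
ICON_LABELS: Sequence[str] = ("back", "refresh", "search", "menu", "share")


@dataclass
class UIElement:
    """A UI element as exposed by the platform's accessibility tree."""
    pixels: bytes          # rasterized screenshot crop of the element
    developer_label: str   # text supplied by the app developer, often empty


class IconClassifier:
    """Placeholder for a trained image classifier (e.g., a small CNN)."""

    def predict(self, pixels: bytes) -> str:
        # A real implementation would run model inference on the crop;
        # a fixed label is returned here so the sketch stays runnable.
        return "refresh"


def label_for_speech(element: UIElement, clf: IconClassifier) -> str:
    """Prefer the developer-supplied label; fall back to the inferred one."""
    if element.developer_label.strip():
        return element.developer_label
    return f"{clf.predict(element.pixels)} button"


def speak(text: str) -> None:
    """Stand-in for the platform's text-to-speech call."""
    print(f"[TTS] {text}")


if __name__ == "__main__":
    unlabeled = UIElement(pixels=b"...", developer_label="")
    speak(label_for_speech(unlabeled, IconClassifier()))  # [TTS] refresh button
```

Because the fallback runs only when the developer label is empty, a sketch like this leaves well-labeled apps untouched while filling gaps for apps whose developers did not transcribe their icons.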

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
