Abstract

During a meeting or a call via a collaboration service (which brings together different capabilities such as video conferencing, online meetings, screen sharing, webinars, Web conferencing, and calling), a closed captions feature enables engaging and productive conversations for hard-of-hearing users and users with different levels of language proficiency. However, such a feature is turned off by default in collaboration meeting and calling services, and not all participants will be aware that it exists. Techniques are presented herein that offer a unique cognitive algorithm for the intelligent automatic activation of a live captions feature for struggling participants. The algorithm may, in real time, examine various heuristic, audio, and video patterns (such as a participant's geolocation, their home language, their accent, their fluency, their facial expression and body language, and optionally the context and intent of their speech) to identify which participants need closed captions. Current meeting and calling solutions do not offer a mechanism to automatically detect the need for live captions. Employing the presented techniques may help promote the use of a closed captions feature and, in turn, enable more engaging and productive conversations during a meeting or a call.
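To illustrate how such an algorithm might combine the signals mentioned above, the following is a minimal sketch of a per-participant decision rule. It is not the disclosed implementation: the signal names, weights, and threshold are illustrative assumptions introduced here for clarity only.

```python
# A minimal sketch (not the disclosed implementation) of the decision logic.
# All signal names, weights, and the threshold below are illustrative
# assumptions, not values from the source.

from dataclasses import dataclass


@dataclass
class ParticipantSignals:
    """Real-time signals the algorithm might derive for one participant."""
    home_language: str      # e.g., inferred from profile or geolocation heuristics
    meeting_language: str   # dominant language detected in the meeting audio
    fluency_score: float    # 0.0-1.0, estimated from the participant's speech
    accent_mismatch: float  # 0.0-1.0, speaker accent vs. listener familiarity
    confusion_score: float  # 0.0-1.0, from facial expression and body language


def needs_live_captions(s: ParticipantSignals, threshold: float = 0.5) -> bool:
    """Combine heuristic, audio, and video cues into one score and compare
    it against a threshold. The weights are placeholder values."""
    language_mismatch = 1.0 if s.home_language != s.meeting_language else 0.0
    score = (0.35 * language_mismatch
             + 0.25 * (1.0 - s.fluency_score)
             + 0.20 * s.accent_mismatch
             + 0.20 * s.confusion_score)
    return score >= threshold


# Example: a participant whose home language differs from the meeting
# language and who shows visible signs of confusion.
signals = ParticipantSignals(home_language="es", meeting_language="en",
                             fluency_score=0.4, accent_mismatch=0.6,
                             confusion_score=0.7)
if needs_live_captions(signals):
    print("Automatically enabling live captions for this participant.")
```

In practice, each signal would be produced by its own real-time analysis pipeline (speech recognition, accent and fluency estimation, vision-based affect detection), with the combination step run continuously so that captions can be offered as soon as a participant begins to struggle.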

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
