Abstract

Language instruction and speech therapy are challenging endeavors, and those challenges have been exacerbated by coronavirus disease 2019 (COVID-19), as instruction and therapy increasingly take place over teleconference. To address these challenges, techniques are presented herein that combine a phonetic understanding of a speaker's utterance (obtained, for example, from a speech transcription engine coupled with knowledge of the speaker's native language) with a sentiment analysis of a native and/or non-native speaker (obtained, for example, through facial recognition) to identify when a sound-production problem is material. The combined analysis supports the automated delivery of real-time feedback to language learners and speech therapy patients, helping them effectively target the sounds they are trying to produce. Aspects of the presented techniques may include displaying notifications (e.g., textual hints, pictures, or video clips) to session participants to provide real-time feedback within collaboration systems.
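The abstract describes an architecture rather than an implementation. As an illustrative sketch only, the Python fragment below shows one way a phonetic mismatch score and a facial-expression sentiment score might be combined to decide when a sound-production problem is material enough to surface a hint. All function names, data fields, and thresholds are hypothetical placeholders standing in for the speech transcription, native-language phoneme mapping, and facial-recognition components the publication refers to; they are not drawn from the source.

```python
from dataclasses import dataclass

@dataclass
class UtteranceAnalysis:
    target_phonemes: list[str]     # phonemes the learner is trying to produce
    produced_phonemes: list[str]   # phonemes recognized by a transcription engine
    participant_sentiment: float   # e.g., -1.0 (confused/negative) .. 1.0 (positive),
                                   # hypothetically derived from facial recognition

def phoneme_error_rate(target: list[str], produced: list[str]) -> float:
    """Crude per-position mismatch rate; a real system would align the sequences."""
    if not target:
        return 0.0
    mismatches = sum(1 for t, p in zip(target, produced) if t != p)
    mismatches += abs(len(target) - len(produced))
    return mismatches / len(target)

def should_show_hint(analysis: UtteranceAnalysis,
                     error_threshold: float = 0.2,
                     sentiment_threshold: float = -0.2) -> bool:
    """Treat the problem as material only when the phonetic mismatch is large
    AND the observed sentiment suggests a communication breakdown."""
    mismatch = phoneme_error_rate(analysis.target_phonemes,
                                  analysis.produced_phonemes)
    return (mismatch >= error_threshold
            and analysis.participant_sentiment <= sentiment_threshold)

# Hypothetical example: the learner aims for /θɪŋk/ ("think") but produces
# /sɪŋk/ ("sink"), and the other participant's expression reads as confused.
sample = UtteranceAnalysis(
    target_phonemes=["θ", "ɪ", "ŋ", "k"],
    produced_phonemes=["s", "ɪ", "ŋ", "k"],
    participant_sentiment=-0.6,
)
if should_show_hint(sample):
    print("Display hint: show tongue-placement video for /θ/.")
```

In such a sketch, gating the notification on both signals reflects the abstract's point that a pronunciation error only warrants real-time feedback when it is material, i.e., when it appears to affect the interaction rather than merely deviating from the target phonetics.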

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
