Inventor(s)

Phil Weaver

Abstract

This publication describes an augmentative and alternative communication (AAC) user equipment (UE) that enables a user to select autocompleted sentences displayed in a user interface (UI) of the AAC UE. The AAC UE scans ambient sounds to identify speech spoken in the vicinity of the user and converts the audible speech into digitized speech using a speech-recognition model. The AAC UE also identifies the audience in the conversation with the user by employing user input (e.g., the user selects the audience), voice recognition, facial recognition, radar signature, biometric sensors (e.g., a person may scan their thumb on the AAC UE before communicating with the user), media access control identification (MAC ID) (e.g., the AAC UE can scan the MAC IDs of smartphones used by the audience), radio-frequency identification (RFID) (e.g., an employee’s badge), or other sensors (e.g., in car seats). The AAC UE feeds the digitized speech and the identity of the audience into a machine-learned (ML) model, which analyzes the speech and suggests sentences that the user may want to use. The UI of the AAC UE displays the suggested sentences and waits for user input. The user reads the suggested sentences and selects those that are applicable to the conversation. If the user does not like the suggested sentences, they can use a keyboard to compose a new sentence or to modify a suggested one. The digitized composed speech, aided by user input, is then converted to synthesized speech. In addition, the digitized composed speech becomes an input to the ML model. Iteratively, the ML model is updated to make better predictions in future conversations, thus speeding up the communication process.
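To make the described loop concrete, below is a minimal Python sketch of one conversation turn. All component names (transcribe, identify_audience, SuggestionModel, synthesize) are hypothetical stand-ins for the speech-recognition model, the audience-identification sensors, the ML suggestion model, and the text-to-speech engine; none of these names come from the publication, and a real system would replace each stub with an actual model or sensor interface.

```python
"""Minimal sketch of the AAC suggestion loop described above.

All names are hypothetical stand-ins; no specific library API is implied.
"""

from dataclasses import dataclass, field


@dataclass
class SuggestionModel:
    """Hypothetical ML model mapping (speech, audience) to sentence suggestions."""

    # Per-audience history of sentences the user actually produced.
    history: dict = field(default_factory=dict)

    def suggest(self, digitized_speech: str, audience_id: str) -> list[str]:
        # A real model would condition on the transcript and the audience;
        # this stub simply replays sentences previously used with this audience.
        return self.history.get(audience_id, ["Hello.", "Please repeat that."])

    def update(self, composed_sentence: str, audience_id: str) -> None:
        # Feed the user's final sentence back so future predictions improve.
        self.history.setdefault(audience_id, []).append(composed_sentence)


def transcribe(ambient_audio: bytes) -> str:
    """Stand-in for the speech-recognition model (ambient audio -> digitized speech)."""
    return "How are you feeling today?"


def identify_audience(signals: dict) -> str:
    """Stand-in for audience identification via user input, voice/facial
    recognition, radar signature, biometrics, MAC ID, or RFID."""
    return signals.get("rfid_badge", "unknown")


def synthesize(sentence: str) -> None:
    """Stand-in for the text-to-speech engine (composed text -> synthesized speech)."""
    print(f"[speaker] {sentence}")


def conversation_turn(model: SuggestionModel, audio: bytes, signals: dict) -> None:
    speech = transcribe(audio)             # scan and digitize ambient speech
    audience = identify_audience(signals)  # identify who the user is talking to
    suggestions = model.suggest(speech, audience)

    # Display the suggested sentences in the UI and wait for user input;
    # the user either picks a suggestion or types a new/modified sentence.
    print("Suggestions:", suggestions)
    chosen = input("Select or type a sentence: ").strip() or suggestions[0]

    synthesize(chosen)              # speak the composed sentence aloud
    model.update(chosen, audience)  # close the loop: refine future predictions


if __name__ == "__main__":
    model = SuggestionModel()
    conversation_turn(model, b"...", {"rfid_badge": "badge-1234"})
```

The feedback call at the end of each turn is what makes the loop iterative: every sentence the user actually composes becomes training signal for the model, so suggestions for a given audience improve over repeated conversations.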

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
