Abstract

In current video conferencing applications, a participant's speech audio is captured without any indication of the speaker's spatial position or body orientation relative to the device camera capturing the corresponding video. As a result, the audio experience in video conferencing lacks spatial and directional richness. This disclosure describes techniques to enhance the spatial richness of audio in a video conference based on a user's head orientation. With user permission, head orientation is estimated using measurements from device sensors in earbuds or another device worn by a video conference participant. The head orientation measurements are then used to apply an appropriate positional correction to the audio using a head-related transfer function (HRTF). Implementation of the techniques can improve the spatial accuracy of the audio feed within a video conference, making conversations sound more realistic.
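The core idea (estimate head orientation, then steer the audio rendering accordingly) can be illustrated with a minimal sketch. The disclosure itself applies a full HRTF; as a stand-in, the example below uses two much simpler binaural cues, an interaural time difference (Woodworth's spherical-head approximation) and a crude interaural level difference. All function names, the head-radius constant, and the ILD scaling are illustrative assumptions, not part of the original disclosure.

```python
import math

SPEED_OF_SOUND_M_S = 343.0   # speed of sound in air
HEAD_RADIUS_M = 0.0875       # average adult head radius (assumed constant)

def relative_azimuth_deg(source_azimuth_deg: float, head_yaw_deg: float) -> float:
    """Source direction relative to the listener's head, wrapped to (-180, 180]."""
    return (source_azimuth_deg - head_yaw_deg + 180.0) % 360.0 - 180.0

def woodworth_itd_s(azimuth_deg: float) -> float:
    """Interaural time difference from Woodworth's spherical-head model."""
    theta = math.radians(min(abs(azimuth_deg), 90.0))
    return (HEAD_RADIUS_M / SPEED_OF_SOUND_M_S) * (theta + math.sin(theta))

def spatialize(mono: list, sample_rate: int,
               source_azimuth_deg: float, head_yaw_deg: float):
    """Render a mono audio frame to stereo using crude ITD and ILD cues.

    A production system would instead convolve the signal with the
    HRTF pair selected for the measured relative azimuth.
    """
    az = relative_azimuth_deg(source_azimuth_deg, head_yaw_deg)
    shift = int(round(woodworth_itd_s(az) * sample_rate))  # delay in samples
    # Simple level difference: attenuate the ear facing away from the source.
    far_gain = 1.0 - 0.4 * abs(math.sin(math.radians(az)))
    delayed = [0.0] * shift + mono[:len(mono) - shift]
    if az >= 0:  # source to the listener's right: left ear is delayed and quieter
        left, right = [far_gain * s for s in delayed], mono[:]
    else:        # source to the left: right ear is delayed and quieter
        left, right = mono[:], [far_gain * s for s in delayed]
    return left, right
```

For example, if the head-tracking sensors report that the listener has turned 90 degrees toward a talker originally rendered at 90 degrees azimuth, `relative_azimuth_deg(90, 90)` returns 0 and the talker is rendered straight ahead, which is the positional correction the disclosure describes.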

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
