Abstract

Immersive virtual reality (VR) and augmented reality (AR) environments rely on high-quality audio. To this end, VR and AR platform developers have adopted Ambisonics, a technique for recording, modifying, and recreating full-sphere surround sound. To render (or decode) the sound field as faithfully as possible, developers often use a set of head-related transfer functions (HRTFs). However, HRTFs processed with truncated spherical harmonics introduce errors that grow with frequency. A new method for rendering spatial audio ambisonically that handles each ear independently shifts this paradigm. The conventional HRTF ambisonic encoding is modified to compute the spherical harmonic coefficients for a set of ear-related transfer functions (ERTFs), enabling higher-quality encoding and rendering with fewer ambisonic orders.
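For context, the conventional ambisonic pipeline the abstract refers to encodes a source direction into spherical-harmonic coefficients before any HRTF-based decoding. The sketch below is an illustrative first-order encoder using the common ACN channel ordering and SN3D normalization; it is not the paper's ERTF method, and the function name is hypothetical.

```python
import math

def ambisonic_encode_first_order(azimuth_deg, elevation_deg):
    """Illustrative sketch: encode a plane-wave source direction into
    first-order ambisonic coefficients (ACN ordering, SN3D normalization).
    Returns the four channels [W, Y, Z, X]."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = 1.0                          # order 0: omnidirectional component
    y = math.sin(az) * math.cos(el)  # order 1: left/right
    z = math.sin(el)                 # order 1: up/down
    x = math.cos(az) * math.cos(el)  # order 1: front/back
    return [w, y, z, x]
```

A source straight ahead (azimuth 0°, elevation 0°) yields coefficients [1, 0, 0, 1]; truncating at low order like this is exactly what causes the high-frequency rendering errors the abstract describes.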

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
