Video chat in a virtual environment can include avatars that are animated to track the facial expressions of a speaker. While most chat interfaces can transport video streams captured at a client device, few support transporting animation data, e.g., the data needed to animate a user’s avatar in the virtual chat. In these scenarios, conventional chat technologies may transport the entire video so that a receiving client device can extract data for animation, e.g., facial coordinates. This wastes network bandwidth and client device battery life, and is unsuitable for client devices with low computational capacity. Techniques are described herein that enable animation data and audio data to be transported over WebRTC, an established web protocol, with minimal configuration changes. Animation data is extracted and packaged into a customized video stream. This stream is substantially smaller than a typical video stream that carries full image and frame data. Audio data is transported via a conventional audio stream. Upon receipt of the animation data and audio data, client devices can render animated avatars with synchronized audio at low resource consumption.
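The packaging of animation data into a compact payload, in place of full image frames, might be sketched as follows. The field layout (timestamp, landmark count, float32 coordinate pairs) and the use of normalized 2D facial landmarks are illustrative assumptions for this sketch, not the disclosed wire format:

```python
import struct

def pack_animation_frame(timestamp_ms, landmarks):
    """Pack one frame of facial landmarks into a compact payload.

    landmarks: list of (x, y) floats, assumed normalized to [0, 1].
    Hypothetical layout: uint32 timestamp, uint16 landmark count,
    then one little-endian float32 pair per landmark.
    """
    payload = struct.pack("<IH", timestamp_ms, len(landmarks))
    for x, y in landmarks:
        payload += struct.pack("<ff", x, y)
    return payload

def unpack_animation_frame(payload):
    """Recover the timestamp and landmark list from a packed payload."""
    timestamp_ms, count = struct.unpack_from("<IH", payload, 0)
    offset = struct.calcsize("<IH")
    landmarks = []
    for _ in range(count):
        x, y = struct.unpack_from("<ff", payload, offset)
        landmarks.append((x, y))
        offset += struct.calcsize("<ff")
    return timestamp_ms, landmarks
```

Even for a dense face mesh of several hundred landmarks, such a payload is a few kilobytes per frame, versus roughly 900 KB for one uncompressed 640x480 RGB frame, which illustrates why a customized stream of animation data is far smaller than a stream of image frames.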

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.