The present disclosure describes a mixed reality capture (MRC) system that displays the entire face of a user to convey the user’s expressions and reactions in a mixed reality (MR) environment. The MRC system comprises an MR environment in which a number of users interact with each other or are immersed in a virtual scene. The MRC system deploys two methods: a video capture method within an MR headset, and a method for compositing the facial areas obscured by the MR headset, as captured by the video capture method, into an MRC composition system.

The video capture method employs a plurality of cameras mounted within the headset eyebox of the MR headset. The cameras capture a video of the obscured facial areas, which is projected as a rendered surface in the MR environment at the correct location and orientation. The location and orientation of the rendered surface remain in complete synchronization with the location and orientation of the MR headset; in other words, the rendered surface moves in accordance with changes in the location and orientation of the MR headset.

The MR headset of each user includes an MRC camera and a virtual camera. The rendered surface captured by the virtual camera and the MR environment captured by the MRC camera are provided as inputs to the MRC composition system, which composites the two to generate an MRC output frame. The MRC output frame is then displayed on a display device associated with the MR headset of each user.
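As a minimal sketch of the compositing stage described above, the following Python example blends a rendered face surface (as captured by the virtual camera) over an MR environment frame (as captured by the MRC camera) to produce an MRC output frame, and shows a pose matrix that keeps the rendered surface synchronized with the headset. All function names, data shapes, and the alpha-mask blending approach are illustrative assumptions, not details taken from the disclosure.

```python
import numpy as np

def headset_pose(position, rotation):
    """Build a 4x4 rigid transform from a translation vector and a 3x3
    rotation matrix. Illustrative: the rendered face surface would reuse
    the headset's pose so its location and orientation stay synchronized
    with the MR headset."""
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = position
    return pose

def composite_frame(env_frame, face_frame, face_mask):
    """Composite the rendered face surface over the MR environment frame.
    env_frame, face_frame: HxWx3 float arrays in [0, 1].
    face_mask: HxW float array, 1.0 where the obscured facial area
    should appear in the MRC output frame."""
    mask = face_mask[..., None]  # broadcast mask over the color channels
    return env_frame * (1.0 - mask) + face_frame * mask

# Toy 2x2 RGB frames: a black environment and a white face surface,
# with the face occupying only the top-left pixel.
env = np.zeros((2, 2, 3))
face = np.ones((2, 2, 3))
mask = np.array([[1.0, 0.0],
                 [0.0, 0.0]])
mrc_output = composite_frame(env, face, mask)
```

In a real pipeline the mask would come from rendering the face surface at the headset pose through the virtual camera; here it is hard-coded only to make the blending step concrete.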

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.