Abstract

This publication describes techniques for deblurring faces in images by utilizing multi-camera (e.g., dual-camera) fusion. In the techniques, multiple cameras of a computing device (e.g., a wide-angle camera and an ultrawide-angle camera) concurrently capture a scene. A multi-camera fusion technique fuses the captured images to generate an image with increased sharpness while preserving the brightness of the scene and other details when the scene contains motion. The images are processed by a Deblur Module, which includes an optical flow machine-learned model that generates a warped ultrawide-angle image, a subject mask produced by a model trained to identify and mask faces detected in the wide-angle image, and an occlusion mask for handling occlusion artifacts. The warped ultrawide-angle image, the raw wide-angle image (with blurred faces), the sharp ultrawide-angle image, the subject mask, and the occlusion mask are then stacked and merged (fused) using a machine-learning model to output a sharp image without motion blur. This publication further describes techniques that utilize adaptive multi-streaming to optimize power consumption and dual-camera usage on computing devices.
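As a rough illustration of the stack-and-merge fusion step summarized above, the following Python sketch (using PyTorch) shows how the warped ultrawide-angle image, the raw wide-angle image, the sharp ultrawide-angle image, the subject mask, and the occlusion mask might be concatenated along the channel dimension and merged by a learned model into a single deblurred output. The FusionNet class, its layer sizes, and the channel layout are hypothetical assumptions for illustration only and are not the publication's actual implementation.

# Hypothetical sketch of the stack-and-merge fusion step; not the publication's
# actual model. Assumes PyTorch and RGB inputs of identical spatial size.
import torch
import torch.nn as nn


class FusionNet(nn.Module):
    """Toy fusion model: merges stacked inputs into a single sharp RGB image."""

    def __init__(self, in_channels: int = 11):
        super().__init__()
        # 3 (warped ultrawide) + 3 (raw wide) + 3 (sharp ultrawide)
        # + 1 (subject mask) + 1 (occlusion mask) = 11 input channels.
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),  # sharp RGB output
        )

    def forward(
        self,
        warped_ultrawide: torch.Tensor,   # (N, 3, H, W)
        raw_wide: torch.Tensor,           # (N, 3, H, W), faces may be blurred
        sharp_ultrawide: torch.Tensor,    # (N, 3, H, W)
        subject_mask: torch.Tensor,       # (N, 1, H, W), 1 where a face is detected
        occlusion_mask: torch.Tensor,     # (N, 1, H, W), 1 where occlusion artifacts occur
    ) -> torch.Tensor:
        # Stack all inputs along the channel dimension, then merge (fuse) them.
        stacked = torch.cat(
            [warped_ultrawide, raw_wide, sharp_ultrawide, subject_mask, occlusion_mask],
            dim=1,
        )
        return self.net(stacked)


if __name__ == "__main__":
    # Dummy tensors standing in for the Deblur Module's intermediate outputs.
    n, h, w = 1, 128, 128
    model = FusionNet()
    output = model(
        torch.rand(n, 3, h, w),
        torch.rand(n, 3, h, w),
        torch.rand(n, 3, h, w),
        torch.rand(n, 1, h, w),
        torch.rand(n, 1, h, w),
    )
    print(output.shape)  # torch.Size([1, 3, 128, 128])

In practice, the publication's fusion model would be trained so that sharp ultrawide-angle content is transferred onto face regions indicated by the subject mask while the occlusion mask suppresses artifacts; the layer structure shown here is only a placeholder for that learned merge.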

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
