Abstract

A system and method for implementing automated fusion of multiple recordings of an event using video processing is disclosed. The application determines the best camera angle among the available recordings for viewing the important parts of the event and allows the user to automatically generate a single fused version from the several recordings. The method comprises video synchronization, tracking and clustering of objects in the video, followed by inference of the optimum video feed. It greatly reduces time and eliminates manual effort, since identifying the best-quality version among those available, sequencing the various clips correctly, and fusing them seamlessly are all automated.
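
The pipeline described above lends itself to a straightforward software structure. The following is a minimal Python sketch of the three stages, assuming audio cross-correlation as the synchronization cue and a simple frame-coverage score over tracked object boxes as a stand-in for "best camera angle"; the function names and the scoring heuristic are illustrative assumptions, not part of the disclosed method.

```python
import numpy as np

def estimate_offset(audio_ref, audio_other, sample_rate):
    """Estimate the time offset (seconds) of `audio_other` relative to
    `audio_ref` by locating the peak of their cross-correlation.
    (Assumes both recordings captured the same event audio.)"""
    corr = np.correlate(audio_other, audio_ref, mode="full")
    lag = np.argmax(corr) - (len(audio_ref) - 1)
    return lag / float(sample_rate)

def coverage_score(boxes, frame_size):
    """Score one camera at one time step by how much of the frame the
    clustered tracked objects occupy; `boxes` is a list of (x, y, w, h)."""
    frame_w, frame_h = frame_size
    covered = sum(w * h for _, _, w, h in boxes)
    return covered / float(frame_w * frame_h)

def select_best_feed(per_camera_tracks, frame_sizes):
    """For each time step, pick the camera whose tracked objects cover
    the largest fraction of its frame (an illustrative proxy for the
    optimum-feed inference stage)."""
    timeline = []
    for tracks_at_t in zip(*per_camera_tracks):
        scores = [coverage_score(boxes, size)
                  for boxes, size in zip(tracks_at_t, frame_sizes)]
        timeline.append(int(np.argmax(scores)))
    return timeline
```

The resulting per-time-step camera indices, combined with the estimated offsets, could then drive the cutting and concatenation of the synchronized clips into the fused output.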

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
