This publication describes techniques and apparatuses that enable an electronic device (e.g., a smartphone) with at least one camera to capture, render, and/or process frames. As the smartphone captures frames of a scene, it determines which frames to render and/or process. Initially, the smartphone performs a global (fast) motion estimation to correctly render and/or process the captured frames. Information from the global motion estimation enables the smartphone to merge the captured frames into one merged frame or a set of merged-frame candidates, and to perform target (object) detection on a merged frame.

If the smartphone fails to detect a target (e.g., inside a target-detection box), it proceeds to render and/or process the next merged frame in the set of merged-frame candidates. If the smartphone does detect a target inside the target-detection box, it allocates the necessary resources to perform local (accurate, detailed) motion estimation inside that box. The smartphone may use the information from the local motion estimation to merge the target, generate a local frame patch, and/or verify the target; the local motion estimation helps reduce or remove undesired artifacts (e.g., ghosting). Lastly, the smartphone projects the location of the target onto the corresponding merged frame to generate a resulting frame with increased signal and/or increased signal-to-noise ratio (SNR).
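The publication does not disclose a concrete implementation of these stages. The following is a minimal sketch in Python/NumPy of how the pipeline could be structured, assuming phase correlation as the fast global motion estimator, simple shift-and-average as the merge, and a placeholder thresholded detector; all function names, the box format `(y0, y1, x0, x1)`, and the detector are hypothetical illustrations, not the publication's method.

```python
import numpy as np


def estimate_global_motion(ref, frame):
    """Coarse whole-frame translation estimate via phase correlation
    (one plausible stand-in for the fast global motion estimation)."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(frame)
    cross /= np.abs(cross) + 1e-12          # normalized cross-power spectrum
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:                         # map wrap-around peaks to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)


def merge_frames(frames, shifts):
    """Undo each frame's estimated global shift and average the stack,
    which is what increases signal and SNR in the merged frame."""
    aligned = [np.roll(f, (-dy, -dx), axis=(0, 1))
               for f, (dy, dx) in zip(frames, shifts)]
    return np.mean(aligned, axis=0)


def detect_target(merged, box, threshold):
    """Placeholder detector: declares a target present inside the
    target-detection box when the mean intensity exceeds a threshold.
    The publication leaves the actual detector unspecified."""
    y0, y1, x0, x1 = box
    return bool(merged[y0:y1, x0:x1].mean() > threshold)


def estimate_local_motion(ref, frame, box):
    """Accurate, detailed motion estimation restricted to the
    target-detection box: here, phase correlation on the crop alone."""
    y0, y1, x0, x1 = box
    return estimate_global_motion(ref[y0:y1, x0:x1], frame[y0:y1, x0:x1])
```

In this sketch, `detect_target` gates the more expensive `estimate_local_motion`, mirroring the publication's point that detailed motion estimation is only spent on frames where a target was actually found.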

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.