Inventor(s)

Kyle Lampert

Abstract

Creating animated video sequences from static images to accompany a narration can be a time-consuming and skill-intensive process. A described method may automate aspects of generating documentary-style motion graphics. A system can receive a static image and a narrative description as inputs. For example, a first artificial intelligence (AI) model can analyze the image to identify relevant points of interest. Subsequently, a second AI model may generate a structured motion graphics plan that defines a sequence of virtual camera movements and on-screen annotations. A rendering engine can then interpret this plan to produce a video file, which can synchronize the animated effects with an associated narration. This approach may facilitate the automated transformation of static visual aids, such as diagrams or charts, into dynamic video content, potentially reducing the need for manual video editing.
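The abstract does not specify the format of the structured motion graphics plan. As a minimal sketch only, assuming a hypothetical schema in which the plan lists timed camera moves and on-screen annotations keyed to narration time, such a plan might be represented and serialized as follows (all class and field names below are illustrative, not taken from the disclosure):

# Illustrative sketch (not the disclosed implementation): a structured
# motion graphics plan expressed as plain data, which a rendering engine
# could interpret to animate a static image in sync with narration.
from dataclasses import dataclass, field
from typing import List
import json

@dataclass
class CameraMove:
    start_sec: float   # when the move begins, relative to the narration
    end_sec: float     # when the move ends
    center_x: float    # normalized focus point (0..1) within the image
    center_y: float
    zoom: float        # 1.0 = full image, 2.0 = 2x zoom on the focus point

@dataclass
class Annotation:
    start_sec: float
    end_sec: float
    text: str          # on-screen label, e.g. a callout for a chart region
    anchor_x: float    # normalized anchor point for the label
    anchor_y: float

@dataclass
class MotionGraphicsPlan:
    image_path: str
    narration_path: str
    camera_moves: List[CameraMove] = field(default_factory=list)
    annotations: List[Annotation] = field(default_factory=list)

    def to_json(self) -> str:
        # Serialize so a separate rendering engine can consume the plan.
        return json.dumps(self, default=lambda o: o.__dict__, indent=2)

# Example plan: zoom in on one point of interest while a caption is shown.
plan = MotionGraphicsPlan(
    image_path="diagram.png",
    narration_path="narration.wav",
    camera_moves=[CameraMove(0.0, 4.0, 0.65, 0.30, 2.0)],
    annotations=[Annotation(1.0, 4.0, "Region of interest", 0.65, 0.25)],
)
print(plan.to_json())

A plain-data plan of this kind keeps the two AI models decoupled from the rendering engine: the models only need to emit the plan, and the renderer only needs to interpret it.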

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
