Abstract

A system is proposed herein that automates the real-time conversion of spoken ideas into visual diagrams, dynamically creating and updating visuals such as flowcharts and organizational charts during online collaboration sessions (e.g., online meetings). The proposed system integrates an Automatic Speech Recognition (ASR) engine with Large Language Model (LLM)-based semantic parsing, live rendering, and persistent memory to support multi-modal collaboration and continuity across sessions. By reducing the cognitive load on participants and eliminating manual diagram management, the proposed system accelerates the design and documentation of complex processes and workflows.
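The following is a minimal, hypothetical sketch of how such a pipeline might be structured, assuming an ASR transcript stream feeds an LLM-based parser whose output updates a persistent diagram state that is then rendered live. All class and function names (TranscriptChunk, DiagramState, parse_to_diagram_ops, apply_ops) are illustrative placeholders, not part of any named product or API, and the keyword rule stands in for the LLM parsing step.

```python
from dataclasses import dataclass, field

@dataclass
class TranscriptChunk:
    """One segment of text emitted by the ASR engine for a speaker turn."""
    speaker: str
    text: str

@dataclass
class DiagramState:
    """Persistent diagram memory carried across collaboration sessions."""
    nodes: dict = field(default_factory=dict)   # node id -> label
    edges: list = field(default_factory=list)   # (source id, target id)

def parse_to_diagram_ops(chunk: TranscriptChunk) -> list:
    """Stand-in for LLM-based semantic parsing: map a spoken idea to
    diagram operations (add_node, add_edge). A real system would prompt
    an LLM; this stub uses a trivial keyword rule for illustration."""
    ops = []
    text = chunk.text.lower()
    if "then" in text:
        left, right = text.split("then", 1)
        ops.append(("add_node", left.strip()))
        ops.append(("add_node", right.strip()))
        ops.append(("add_edge", left.strip(), right.strip()))
    else:
        ops.append(("add_node", text.strip()))
    return ops

def apply_ops(state: DiagramState, ops: list) -> None:
    """Live-update step: apply parsed operations to the shared diagram."""
    for op in ops:
        if op[0] == "add_node":
            state.nodes.setdefault(op[1], op[1])
        elif op[0] == "add_edge":
            state.edges.append((op[1], op[2]))

if __name__ == "__main__":
    state = DiagramState()
    # Simulated ASR output; a real deployment would stream chunks in real time.
    for chunk in [TranscriptChunk("alice", "collect requirements then draft the workflow")]:
        apply_ops(state, parse_to_diagram_ops(chunk))
    print(state.nodes)
    print(state.edges)
```

In a deployed system, the DiagramState would be persisted between meetings to provide the continuity described above, and apply_ops would drive a live rendering layer visible to all participants.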

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
