Inventor(s)

Kenneth Davis

Abstract

Field

Computer-implemented systems for detecting employee knowledge deficits by fusing gap signals from multiple independent data modalities, including voice transcripts, clickstream behavior, and learning management system (LMS) interaction patterns, into a unified deficit determination.

Background

Any single data source for gap detection has blind spots. Natural language analysis misses employees who never ask questions. Clickstream analysis misses employees who work outside enterprise software. Assessment scores miss employees who test well but perform poorly. The approach described here combines multiple modalities to produce a more complete picture of employee competency.

Technical Description

The system integrates gap signals from three or more independent modalities via a fusion architecture. Each modality produces an independent signal in a standardized format: employee_id, knowledge_domain_id, signal_source, signal_strength (0.0 to 1.0), confidence (0.0 to 1.0), and timestamp.

Modality 1 (Voice/ASR): Gap signals from automatic speech recognition (ASR) analysis of recorded interactions (sales calls, customer service, internal meetings). The ASR module transcribes audio, and a text analysis component identifies indicators such as incorrect statements, hedging language, and deferred questions. Signal strength reflects the severity and frequency of detected indicators.

Modality 2 (Clickstream/Behavioral): Gap signals from UI interaction analysis as described in a companion publication. Signal strength reflects the composite deficit score derived from dwell anomalies, help access frequency, feature avoidance, and navigation entropy.

Modality 3 (LMS Interaction): Gap signals from learning management system engagement patterns, including course module abandonment rates (starting but not completing modules in a domain), assessment retake frequency, and content replay rates. Signal strength reflects deviation from peer baseline engagement patterns.
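The standardized signal format described above can be illustrated with a minimal Python sketch. The field names come from the description; the class name, concrete types, and the range validation are assumptions added for illustration:

```python
from dataclasses import dataclass

@dataclass
class GapSignal:
    """One standardized gap signal emitted by a single modality.

    Field names follow the standardized format in the description;
    types and validation are illustrative assumptions.
    """
    employee_id: str
    knowledge_domain_id: str
    signal_source: str      # e.g. "voice_asr", "clickstream", "lms"
    signal_strength: float  # 0.0 to 1.0
    confidence: float       # 0.0 to 1.0
    timestamp: float        # e.g. Unix epoch seconds

    def __post_init__(self):
        # Enforce the 0.0-1.0 ranges stated in the description.
        if not 0.0 <= self.signal_strength <= 1.0:
            raise ValueError("signal_strength must be in [0.0, 1.0]")
        if not 0.0 <= self.confidence <= 1.0:
            raise ValueError("confidence must be in [0.0, 1.0]")
```

Because every modality emits this same record shape, downstream alignment and fusion stages can treat heterogeneous sources uniformly.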
A temporal alignment module synchronizes signals across modalities. Because different data sources produce signals at different frequencies and latencies, the module aggregates signals into aligned evaluation windows (default: 7-day windows). Signals arriving outside a window are attributed to the nearest window boundary.

A fusion engine combines aligned signals using configurable per-modality weights (defaults: voice 0.35, clickstream 0.35, LMS 0.30). When signals from two or more modalities agree (both exceed their individual thresholds for the same employee-domain pair in the same window), the fusion engine computes a corroborated deficit score. When signals conflict (one modality flags a deficit but another does not), the engine applies a conflict resolution rule: the higher-confidence signal takes precedence, and the conflicting signal is logged for review.

A gap record is generated when the corroborated deficit score exceeds a configurable threshold (default: 0.6) for two or more consecutive evaluation windows AND at least two modalities contributed corroborating signals. The gap record contains: employee_id, knowledge_domain_id, corroborated_score, per_modality_signals, contributing_modalities, conflict_flags, and evaluation_window_timestamps.

Distinguishing Characteristics

This system fuses signals from multiple independent modalities rather than relying on any single input type. No single modality must be present for the system to function. The fusion architecture produces corroborated deficit determinations that are more reliable than any single-modality detection. The temporal alignment and conflict resolution mechanisms address the practical challenge of combining heterogeneous data sources that operate at different frequencies and confidence levels.
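The alignment, fusion, and gap-record steps can be sketched in Python. The default weights (0.35/0.35/0.30), the 0.6 corroborated-score threshold, the 7-day window, and the two-consecutive-window rule come from the description; the per-modality threshold of 0.5, the confidence-weighted averaging scheme, and all function names are assumptions made for this sketch:

```python
WINDOW_SECONDS = 7 * 24 * 3600  # default 7-day evaluation windows
MODALITY_WEIGHTS = {"voice": 0.35, "clickstream": 0.35, "lms": 0.30}
PER_MODALITY_THRESHOLD = 0.5    # assumed per-modality flag threshold
DEFICIT_THRESHOLD = 0.6         # default corroborated-score threshold

def window_index(timestamp, epoch=0.0):
    """Assign a signal timestamp to an aligned evaluation window."""
    return int((timestamp - epoch) // WINDOW_SECONDS)

def fuse_window(signals):
    """Fuse one window's aligned signals for one employee-domain pair.

    `signals` maps modality name -> (signal_strength, confidence).
    Returns (corroborated_score, contributing_modalities, conflict_flags).
    """
    # Modalities whose signal strength exceeds their flag threshold.
    flagged = {m for m, (s, _) in signals.items()
               if s >= PER_MODALITY_THRESHOLD}
    conflicts = []
    if flagged and len(flagged) < len(signals):
        # Disagreement: the highest-confidence signal takes precedence;
        # modalities that disagree with it are logged for review.
        best = max(signals, key=lambda m: signals[m][1])
        best_flagged = best in flagged
        conflicts = sorted(m for m in signals
                           if (m in flagged) != best_flagged)
    # Confidence-weighted, modality-weighted average (assumed scheme).
    num = sum(MODALITY_WEIGHTS[m] * s * c for m, (s, c) in signals.items())
    den = sum(MODALITY_WEIGHTS[m] * c for m, (s, c) in signals.items())
    score = num / den if den else 0.0
    return score, sorted(flagged), conflicts

def emit_gap_record(window_results):
    """Decide whether to emit a gap record from a chronological list of
    (corroborated_score, contributing_modalities) per window: the score
    must exceed the threshold in two consecutive windows, with at least
    two corroborating modalities (checked here in the latest window)."""
    for (s1, _), (s2, m2) in zip(window_results, window_results[1:]):
        if s1 > DEFICIT_THRESHOLD and s2 > DEFICIT_THRESHOLD and len(m2) >= 2:
            return True
    return False
```

For example, fusing `{"voice": (0.8, 0.9), "clickstream": (0.7, 0.6), "lms": (0.2, 0.5)}` yields a corroborated score above 0.6 with voice and clickstream contributing, and the low-strength LMS signal logged as a conflict because it disagrees with the higher-confidence voice signal.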

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
