Inventor(s)

Kenneth Davis

Abstract

Field

Computer-implemented systems for detecting individual knowledge deficits in organizational workforces. This publication describes an approach based on structured assessment score analysis that does not require natural language processing of unstructured communications.

Background

Organizations routinely assess employee knowledge through structured instruments: multiple-choice quizzes, Likert-scale self-assessments, scored simulations, timed task completions. These instruments produce numerical score vectors. The scores can be analyzed computationally to surface knowledge deficits without ever touching a natural language processing pipeline.

Technical Description

The system receives structured assessment results. Each result contains numerical scores across predefined dimensions (for example, product accuracy scored 1 to 5, procedural compliance scored 1 to 5, regulatory knowledge scored 1 to 5) for a given employee-assessment interaction. The system persists each score vector as a structured gap record. The record schema includes: record_id (UUID), employee_id, organization_id, assessment_template_id, dimension_scores (a typed array where each entry has a dimension_name, a numeric_score, and a maximum_score), composite_score (a weighted average across all dimensions), taxonomy_domain, timestamp, and resolution_status.
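The gap record schema above can be sketched as a pair of dataclasses. Field names follow the text; the concrete types and the initial `resolution_status` value of `"open"` are assumptions for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List
from uuid import UUID, uuid4

@dataclass
class DimensionScore:
    dimension_name: str   # e.g. "product_accuracy"
    numeric_score: float  # observed score, e.g. 2.0
    maximum_score: float  # scale maximum, e.g. 5.0

@dataclass
class GapRecord:
    employee_id: str
    organization_id: str
    assessment_template_id: str
    dimension_scores: List[DimensionScore]
    composite_score: float            # weighted average across all dimensions
    taxonomy_domain: str
    resolution_status: str = "open"   # assumed initial value; not specified in the text
    record_id: UUID = field(default_factory=uuid4)
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```

Each record is self-contained, so it can be persisted and queried independently of the assessment that produced it.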

Gap detection here works by threshold comparison rather than NLP classification. For each dimension in the score vector, the system checks whether the numeric score falls below a configurable per-dimension threshold. When it does, that dimension generates a gap signal. A separate composite threshold applies to the weighted average. If the composite falls below that threshold, the system generates an overall gap signal. Once created, the gap record is persisted as an independently queryable data artifact. The system then routes it to downstream consumer systems (learning management systems, manager dashboards, compliance documentation systems) through protocol-specific routing adapters.
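The per-dimension and composite checks reduce to a few deterministic comparisons. A minimal sketch, where the signal tuple format and function name are illustrative assumptions:

```python
def detect_gaps(dimension_scores, per_dim_thresholds, weights, composite_threshold):
    """Threshold-comparison gap detection: per-dimension checks plus a
    composite check on the weighted average. Fully deterministic."""
    signals = []
    # Per-dimension check: a score below its configurable threshold emits a signal.
    for name, score in dimension_scores.items():
        if score < per_dim_thresholds[name]:
            signals.append(("dimension", name, score))
    # Composite check: weighted average against a separate composite threshold.
    total_weight = sum(weights[name] for name in dimension_scores)
    composite = sum(score * weights[name]
                    for name, score in dimension_scores.items()) / total_weight
    if composite < composite_threshold:
        signals.append(("composite", None, composite))
    return composite, signals
```

Because the logic is pure arithmetic over the score vector, the same inputs always produce the same gap signals.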

Confidence scoring relies on four inputs: (a) the magnitude of the shortfall below threshold, where a larger shortfall yields higher confidence; (b) the historical trajectory for the employee-dimension pair, where a declining trend raises confidence; (c) a z-score computed against same-role cohort performance on the same assessment; and (d) an assessment difficulty coefficient derived from population-level performance statistics for the assessment template.
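One way to combine the four inputs is a weighted sum of normalized components. The normalizations, the default weights, and the use of the population pass rate as the difficulty coefficient are all assumptions; the text specifies only the four input categories.

```python
import statistics

def confidence(score, threshold, history, cohort_scores, template_pass_rate,
               w=(0.4, 0.2, 0.2, 0.2)):
    """Combine the four confidence inputs into a single value (assumed weights)."""
    # (a) shortfall magnitude, normalized by the threshold
    shortfall = max(0.0, (threshold - score) / threshold)
    # (b) declining trend: fraction of consecutive drops in recent history (assumed proxy)
    drops = sum(1 for a, b in zip(history, history[1:]) if b < a)
    trend = drops / max(1, len(history) - 1)
    # (c) z-score against same-role cohort, mapped into [0, 1]; negative z = below cohort
    mu = statistics.mean(cohort_scores)
    sd = statistics.stdev(cohort_scores) or 1.0
    z = (score - mu) / sd
    cohort = min(1.0, max(0.0, -z / 3.0))
    # (d) difficulty coefficient: assumed proxy is the population pass rate, so a low
    # score on an easy (high pass rate) assessment raises confidence
    difficulty = template_pass_rate
    return w[0]*shortfall + w[1]*trend + w[2]*cohort + w[3]*difficulty
```

A lower score should produce a higher confidence value, all else equal, which the shortfall and cohort terms guarantee.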

After a gap is identified and remediation is delivered, the system tracks subsequent assessment scores in the flagged taxonomy domain to monitor for recurrence. Recurrence is flagged when a subsequent score drops back below the original threshold within a defined monitoring window.
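The recurrence check is a windowed comparison over post-remediation scores. A minimal sketch, where the 90-day default window and the function name are assumptions:

```python
from datetime import datetime, timedelta

def recurrence_detected(original_threshold, remediation_time, later_scores,
                        window_days=90):
    """True if any post-remediation score in the flagged taxonomy domain
    falls back below the original threshold inside the monitoring window.
    later_scores is a sequence of (timestamp, score) pairs."""
    window_end = remediation_time + timedelta(days=window_days)
    return any(score < original_threshold
               for ts, score in later_scores
               if remediation_time < ts <= window_end)
```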

The system also supports multi-modal gap signal fusion. Structured quiz scores are one input. Behavioral signals from task-completion metrics (time-to-complete, error rates, help-documentation access frequency) are another. Engagement signals from training content interaction patterns (content skip rates, assessment abandonment rates) are a third. Each source produces its own independent gap signal. These are combined through configurable weighting into a single composite gap determination.
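The fusion step can be sketched as a weighted combination of per-source signal strengths. The assumption here is that each source reports a strength in [0, 1] and that the fused value is compared against a decision cutoff; the cutoff value and function name are illustrative, not specified in the text.

```python
def fuse_gap_signals(signals, weights, cutoff=0.5):
    """Combine independent per-source gap signals (each in [0, 1]) into a
    single composite determination via configurable weights.
    cutoff is an assumed decision threshold."""
    total = sum(weights[source] for source in signals)
    fused = sum(strength * weights[source]
                for source, strength in signals.items()) / total
    return fused, fused >= cutoff
```

For example, a strong quiz-score signal can outvote weak behavioral and engagement signals when the quiz source carries the largest weight.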

Distinguishing Characteristics

This system operates entirely on structured numerical inputs. It involves no natural language processing pipeline: no intent classification, no entity extraction, no taxonomy mapping from unstructured text. Gap detection is deterministic, so threshold comparison produces reproducible results every time. The tradeoff is that this approach cannot detect knowledge deficits from unprompted organic communications; it works only when the employee participates in a formal, structured assessment.

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
