Abstract

Proposed herein is a novel multi-agent user feedback analysis system that automatically corrects for selection bias in sparse user feedback, providing accurate, unbiased quality assessments of Large Language Model (LLM) systems.

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.