Abstract

Running large language models (LLMs) is expensive; as a result, service providers that offer access to LLMs impose rate limits or quotas. In addition, while LLMs can be improved based on human feedback, hiring and training personnel to rate LLM responses can be expensive. This disclosure proposes granting users of LLM-powered chatbots or other experiences additional query credits in exchange for scoring LLM responses to one or more prior queries. The query presented for rating can be the user's own prior query, a query from a testing system or developer, a query from another user (with that user's specific permission), etc. The user-provided scores are verified against known prior ratings and/or against the distribution of scores provided by other users for the same query. The techniques allow users to earn additional query credits while providing useful feedback to the service provider that offers the LLM.
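
A minimal sketch of the credit-for-rating flow described above is shown below, assuming a Python implementation. All names (KNOWN_RATINGS, PEER_SCORES, CREDIT_PER_VERIFIED_RATING), data, and thresholds are illustrative assumptions rather than details specified by this disclosure; the sketch only shows the two verification paths mentioned: checking a submitted score against a known prior rating, or against the distribution of scores from other users.

```python
import statistics

# Hypothetical constants; the disclosure does not specify values.
CREDIT_PER_VERIFIED_RATING = 1   # extra queries granted per verified rating
MAX_DEVIATION = 1.0              # accepted distance from a known prior rating

# Ratings already trusted by the provider for specific (query, response) pairs,
# e.g. from a testing system or developer (illustrative data).
KNOWN_RATINGS = {
    ("query-123", "response-a"): 4.0,
}

# Scores previously collected from other users for the same pairs (illustrative data).
PEER_SCORES = {
    ("query-456", "response-b"): [3.0, 4.0, 4.0, 5.0, 3.5],
}


def verify_score(query_id: str, response_id: str, score: float) -> bool:
    """Return True if the user's score is consistent with prior data."""
    key = (query_id, response_id)

    # Path 1: the provider already has a trusted rating for this response.
    if key in KNOWN_RATINGS:
        return abs(score - KNOWN_RATINGS[key]) <= MAX_DEVIATION

    # Path 2: compare against the distribution of scores from other users.
    peers = PEER_SCORES.get(key, [])
    if len(peers) >= 3:
        mean = statistics.mean(peers)
        spread = statistics.stdev(peers) or MAX_DEVIATION
        return abs(score - mean) <= 2 * spread

    # Not enough data to verify either way; accept provisionally.
    return True


def submit_rating(user_credits: int, query_id: str, response_id: str,
                  score: float) -> int:
    """Record a user's rating and return the updated query-credit balance."""
    if verify_score(query_id, response_id, score):
        user_credits += CREDIT_PER_VERIFIED_RATING
    return user_credits


if __name__ == "__main__":
    balance = submit_rating(user_credits=0, query_id="query-123",
                            response_id="response-a", score=4.5)
    print(f"Credits after rating: {balance}")
```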

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
